My earlier mail said something wrong, or at least misleading:

> ...defaulting coreThreads=1 and maxThreads=255 with a SynchronousQueue
> seems like it's asking for trouble *with CPU count << 255*.
I shouldn't have included that last italicized phrase "with CPU count << 255". The point was that SynchronousQueues should have unbounded pool size. Jerome's response of setting maxPoolSize to 10 by default (and still using SynchronousQueue) means that tasks will be rejected that much sooner, which will probably cause more problems for people than a value of 255.

The thing about a SynchronousQueue is that it isn't really a queue -- it has zero capacity. Putting something on a synchronous queue blocks until there's something (i.e., a thread) at the other end to hand it off to directly. In development, or for small applications where you aren't too worried about exhausting thread resources, this is fine. In production systems, though, you want to be able to configure something other than direct handoff.

Here is the relevant section from the TPE javadoc<http://java.sun.com/javase/6/docs/api/java/util/concurrent/ThreadPoolExecutor.html>:

---

Any BlockingQueue<http://java.sun.com/javase/6/docs/api/java/util/concurrent/BlockingQueue.html> may be used to transfer and hold submitted tasks. The use of this queue interacts with pool sizing:

- If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing.
- If corePoolSize or more threads are running, the Executor always prefers queuing a request rather than adding a new thread.
- If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case the task will be rejected.

There are three general strategies for queuing:

1. *Direct handoffs.* A good default choice for a work queue is a SynchronousQueue<http://java.sun.com/javase/6/docs/api/java/util/concurrent/SynchronousQueue.html> that hands off tasks to threads without otherwise holding them. Here, an attempt to queue a task will fail if no threads are immediately available to run it, so a new thread will be constructed.
This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require unbounded maximumPoolSizes to avoid rejection of new submitted tasks. This in turn admits the possibility of unbounded thread growth when commands continue to arrive on average faster than they can be processed.

2. *Unbounded queues.* Using an unbounded queue (for example a LinkedBlockingQueue<http://java.sun.com/javase/6/docs/api/java/util/concurrent/LinkedBlockingQueue.html> without a predefined capacity) will cause new tasks to wait in the queue when all corePoolSize threads are busy. Thus, no more than corePoolSize threads will ever be created. (And the value of maximumPoolSize therefore doesn't have any effect.) This may be appropriate when each task is completely independent of the others, so tasks cannot affect each other's execution; for example, in a web page server. While this style of queuing can be useful in smoothing out transient bursts of requests, it admits the possibility of unbounded work queue growth when commands continue to arrive on average faster than they can be processed.

3. *Bounded queues.* A bounded queue (for example, an ArrayBlockingQueue<http://java.sun.com/javase/6/docs/api/java/util/concurrent/ArrayBlockingQueue.html>) helps prevent resource exhaustion when used with finite maximumPoolSizes, but can be more difficult to tune and control. Queue sizes and maximum pool sizes may be traded off for each other: Using large queues and small pools minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example if they are I/O bound), a system may be able to schedule time for more threads than you otherwise allow. Use of small queues generally requires larger pool sizes, which keeps CPUs busier but may encounter unacceptable scheduling overhead, which also decreases throughput.
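(Interjecting a quick sketch here: the three queuing strategies above, written out as bare ThreadPoolExecutor constructions. The pool and queue sizes are arbitrary illustrations, not Restlet's defaults or anyone's recommendation.)

```java
import java.util.concurrent.*;

// Illustrative constructions of the three queuing strategies.
// Sizes are example values only.
public class QueueStrategies {

    // 1. Direct handoff: zero-capacity queue; needs an effectively
    //    unbounded maximumPoolSize to avoid rejection. (This is what
    //    Executors.newCachedThreadPool() builds internally.)
    static ThreadPoolExecutor directHandoff() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
    }

    // 2. Unbounded queue: maximumPoolSize is never consulted; the pool
    //    stays at corePoolSize while excess work waits in the queue.
    static ThreadPoolExecutor unboundedQueue(int coreSize) {
        return new ThreadPoolExecutor(coreSize, coreSize,
                0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
    }

    // 3. Bounded queue: finite queue plus finite maximumPoolSize; tasks
    //    beyond both are rejected (or handed to a RejectedExecutionHandler).
    static ThreadPoolExecutor boundedQueue(int coreSize, int maxSize,
                                           int queueCapacity) {
        return new ThreadPoolExecutor(coreSize, maxSize,
                60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(queueCapacity));
    }
}
```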
---

(Tim writing again:) In summary:

- SynchronousQueues should use unbounded max pool size; risk: unbounded thread pool growth.
- Unbounded work queues *ignore* max pool size; risk: unbounded work queue growth.
- With bounded work queues there are two ways to go:
  1. Large queues/small pools; risk: artificially low throughput when many tasks are I/O bound.
  2. Small queues/large pools; risk: high scheduling overhead, decreased throughput.

If tasks are interdependent, you want to avoid long queues and small pools, because of the risk that a task will get stuck behind a task that depends on it.

So I think the safest default is Executors.newCachedThreadPool, as long as there's a way to provide a different ExecutorService instance for BaseHelper to use. How many BaseHelper instances are there, typically? The other point I made was that you'd get better utilization if all BaseHelpers just used the same thread pool, instead of one pool per BaseHelper. And then there's my minor nit about the workerService field.

--tim

On Fri, Jul 2, 2010 at 12:23 PM, Tal Liron <tal.li...@threecrickets.com> wrote:

> As long as you're part of the decision-making process for Restlet,
> then I'm OK with it.
>
> > The caveat is that people don't always understand how to use the
> > configuration parameters of ThreadPoolExecutor. There was an exchange
> > on the concurrency-interest mailing list recently that brought this
> > home to me. For example, it seems that a lot of people think of
> > corePoolSize as minPoolSize, the opposite of maxPoolSize, which is the
> > wrong way to think about it. A conservative default in Restlet is
> > probably better than a user configuration based on a misunderstanding.
> >
> > --tim
>
> ------------------------------------------------------
> http://restlet.tigris.org/ds/viewMessage.do?dsForumId=4447&dsMessageId=2628698

------------------------------------------------------
http://restlet.tigris.org/ds/viewMessage.do?dsForumId=4447&dsMessageId=2629090
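P.S. A small demo of the early-rejection point about a finite maxPoolSize with a SynchronousQueue. The sizes here are scaled way down (max of 2 rather than 10 or 255) purely for illustration; the mechanics are the same at any finite max:

```java
import java.util.concurrent.*;

public class DirectHandoffDemo {
    // Submit n long-running tasks to a pool and return how many were
    // rejected. Tasks sleep so submissions all race against busy workers.
    static int submitAndCountRejections(ThreadPoolExecutor pool, int n) {
        Runnable sleeper = () -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) {}
        };
        int rejected = 0;
        for (int i = 0; i < n; i++) {
            try {
                pool.execute(sleeper);
            } catch (RejectedExecutionException e) {
                rejected++;
            }
        }
        return rejected;
    }

    public static void main(String[] args) {
        // Direct handoff with a small maximumPoolSize.
        ThreadPoolExecutor bounded = new ThreadPoolExecutor(
                1, 2,                       // corePoolSize, maximumPoolSize
                60L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());
        // Tasks 1 and 2 each get a thread; task 3 finds no idle thread,
        // the zero-capacity queue refuses it, maximumPoolSize is already
        // reached, so the default AbortPolicy rejects it.
        System.out.println("rejected: "
                + submitAndCountRejections(bounded, 3));  // prints "rejected: 1"
        bounded.shutdown();
    }
}
```

The same three submissions against Executors.newCachedThreadPool() would all run, since its maximumPoolSize is Integer.MAX_VALUE.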