Hi Tim,

 

In the upcoming HTTP/NIO internal connectors for version 2.1, I've made the 
thread pool fully customizable. See the org.restlet.engine.nio.BaseHelper 
class for more details. It currently lives in the Restlet incubator but will 
soon be moved to the SVN trunk.

 

controllerDaemon (boolean, default: true for clients, false for servers)
    Indicates if the controller thread should be a daemon (not blocking JVM 
    exit).

controllerSleepTimeMs (int, default: 50)
    Time for the controller thread to sleep between each control.

minThreads (int, default: 5)
    Minimum number of worker threads waiting to service calls, even if they 
    are idle.

lowThreads (int, default: 8)
    Number of worker threads beyond which the connector is considered 
    overloaded. This triggers protection actions such as no longer accepting 
    new connections.

maxThreads (int, default: 10)
    Maximum number of worker threads that can service calls. When this number 
    is reached, additional calls are queued as long as the "maxQueued" value 
    hasn't been reached.

maxQueued (int, default: 10)
    Maximum number of calls that can be queued if there aren't any worker 
    threads available to service them. If the value is '0', no queue is used 
    and calls are rejected. If the value is '-1', an unbounded queue is used 
    and calls are never rejected.

maxIoIdleTimeMs (int, default: 30000)
    Maximum time to wait on an idle IO operation.

maxThreadIdleTimeMs (int, default: 60000)
    Time for an idle thread to wait for an operation before being collected.

tracing (boolean, default: false)
    Indicates if all messages should be printed on the standard console.

workerThreads (boolean, default: true)
    Indicates if the processing of calls should be done via threads provided 
    by a worker service (i.e. a pool of worker threads). Note that if set to 
    false, calls will be processed by a single IO selector thread, which 
    should never block, otherwise the other connections would hang.

inboundBufferSize (int, default: 8*1024)
    Size of the content buffer for receiving messages.

outboundBufferSize (int, default: 32*1024)
    Size of the content buffer for sending messages.

directBuffers (boolean, default: true)
    Indicates if direct NIO buffers should be allocated instead of regular 
    buffers. See the NIO ByteBuffer Javadocs.

transport (String, default: TCP)
    Indicates the transport protocol, such as TCP or UDP.
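
For instance, these parameters are meant to be passed through the connector's 
context. A minimal sketch, assuming the usual Restlet 2.x context-parameter 
mechanism (the values below are arbitrary illustrations, not recommendations):

    import org.restlet.Component;
    import org.restlet.Server;
    import org.restlet.data.Protocol;

    public class TunedServer {
        public static void main(String[] args) throws Exception {
            Component component = new Component();
            Server server = component.getServers().add(Protocol.HTTP, 8182);

            // Connector tuning: all values are strings, names as in the
            // table above.
            server.getContext().getParameters().add("minThreads", "5");
            server.getContext().getParameters().add("lowThreads", "40");
            server.getContext().getParameters().add("maxThreads", "50");
            server.getContext().getParameters().add("maxQueued", "0");

            component.start();
        }
    }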

 

 

Best regards,
Jerome
--
Restlet ~ Founder and Technical Lead ~ http://www.restlet.org
Noelios Technologies ~ http://www.noelios.com

 

 

 

 

From: tpeie...@gmail.com [mailto:tpeie...@gmail.com] On Behalf Of Tim Peierls
Sent: Saturday, July 3, 2010 7:15 PM
To: discuss@restlet.tigris.org
Subject: Re: ClientResource leaves inactive thread

 

My earlier mail said something wrong, or at least misleading: 

...defaulting coreThreads=1 and maxThreads=255 with a SynchronousQueue seems 
like it's asking for trouble _with CPU count << 255_. 

 

I shouldn't have included that last italicized phrase "with CPU count << 255". 
The point was that a SynchronousQueue should be used with an unbounded maximum 
pool size.

 

Jerome's response of setting maxPoolSize to 10 by default (and still using 
SynchronousQueue) means that tasks will be rejected that much sooner, which 
will probably cause more problems for people than a value of 255.

 

The thing about a SynchronousQueue is that it isn't really a queue -- it has 
zero capacity. Putting something on a synchronous queue blocks until there's 
something (i.e., a thread) at the other end to hand it off to directly. In 
development or for small applications where you aren't too worried about 
exhausting thread resources, this is fine. In production systems, though, you 
want to be able to configure something other than direct handoff.
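
To see the direct-handoff behavior concretely, here's a minimal sketch (pool 
sizes and sleeps are arbitrary, chosen just to saturate the pool): with all 
ten threads busy and zero queue capacity, the eleventh concurrent submission 
is rejected.

    import java.util.concurrent.RejectedExecutionException;
    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class HandoffDemo {
        public static void main(String[] args) {
            // Zero-capacity queue: every task must be handed directly to a
            // thread, or a new thread is created, up to maximumPoolSize.
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    1, 10, 60L, TimeUnit.SECONDS,
                    new SynchronousQueue<Runnable>());

            for (int i = 0; i < 11; i++) {
                try {
                    pool.execute(new Runnable() {
                        public void run() {
                            try {
                                Thread.sleep(1000);
                            } catch (InterruptedException e) {
                                Thread.currentThread().interrupt();
                            }
                        }
                    });
                } catch (RejectedExecutionException e) {
                    // All 10 threads busy, nowhere to queue: rejected.
                    System.out.println("rejected: " + e);
                }
            }
            pool.shutdown();
        }
    }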

 

Here is the relevant section from the TPE javadoc 
(http://java.sun.com/javase/6/docs/api/java/util/concurrent/ThreadPoolExecutor.html):

---

Any BlockingQueue may be used to transfer and hold submitted tasks. The use of 
this queue interacts with pool sizing:

*       If fewer than corePoolSize threads are running, the Executor always 
prefers adding a new thread rather than queuing.
*       If corePoolSize or more threads are running, the Executor always 
prefers queuing a request rather than adding a new thread.
*       If a request cannot be queued, a new thread is created unless this 
would exceed maximumPoolSize, in which case, the task will be rejected.

There are three general strategies for queuing:

1.     Direct handoffs. A good default choice for a work queue is a 
SynchronousQueue that hands off tasks to threads without otherwise holding 
them. Here, an attempt to queue a task will fail if no threads are immediately 
available to run it, so a new thread will be constructed. This policy avoids 
lockups when handling sets of requests that might have internal dependencies. 
Direct handoffs generally require unbounded maximumPoolSizes to avoid rejection 
of new submitted tasks. This in turn admits the possibility of unbounded thread 
growth when commands continue to arrive on average faster than they can be 
processed.

2.     Unbounded queues. Using an unbounded queue (for example a 
LinkedBlockingQueue without a predefined capacity) will cause new tasks to 
wait in the queue when all corePoolSize threads are busy. Thus, no more than 
corePoolSize threads will ever be created. (And the value of the 
maximumPoolSize therefore doesn't have any effect.) This may be appropriate 
when each task is completely independent of others, so tasks cannot affect 
each other's execution; for example, in a web page server. While this style of 
queuing can be useful in smoothing out transient bursts of requests, it admits 
the possibility of unbounded work queue growth when commands continue to arrive 
on average faster than they can be processed.

3.     Bounded queues. A bounded queue (for example, an ArrayBlockingQueue) 
helps prevent resource exhaustion when used with finite maximumPoolSizes, but 
can be more difficult to tune and control. Queue sizes and maximum pool sizes 
may be traded off for each other: Using large queues and small pools minimizes 
CPU usage, OS resources, and context-switching overhead, but can lead to 
artificially low throughput. If tasks frequently block (for example if they 
are I/O bound), a system may be able to schedule time for more threads than 
you otherwise allow. Use of small queues generally requires larger pool sizes, 
which keeps CPUs busier but may encounter unacceptable scheduling overhead, 
which also decreases throughput.

---

 

(Tim writing again:)

 

In summary:

*       SynchronousQueues should use an unbounded max pool size; the risk is 
unbounded thread pool growth.
*       Unbounded work queues ignore the max pool size; the risk is unbounded 
work queue growth.
*       With bounded work queues there are two ways to go:

1.      Large queues/small pools: risk artificially low throughput when many 
tasks are I/O bound.
2.      Small queues/large pools: risk high scheduling overhead and decreased 
throughput.

If tasks are interdependent, you want to avoid long queues and small pools, 
because of the risk that a task will get stuck behind a task that depends on it.
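
To make those trade-offs concrete, here's a sketch of the two queue styles 
(the sizes are arbitrary illustrations, not recommendations):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class QueueStrategies {
        public static void main(String[] args) {
            // Unbounded queue: the max of 255 is never reached; the pool
            // stays at 5 threads and excess work piles up in the queue.
            ThreadPoolExecutor unbounded = new ThreadPoolExecutor(
                    5, 255, 60L, TimeUnit.SECONDS,
                    new LinkedBlockingQueue<Runnable>());

            // Bounded queue: at most 50 threads plus 100 queued tasks; the
            // 151st concurrent submission is rejected rather than exhausting
            // memory or threads.
            ThreadPoolExecutor bounded = new ThreadPoolExecutor(
                    5, 50, 60L, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<Runnable>(100));

            unbounded.shutdown();
            bounded.shutdown();
        }
    }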

 

So I think the safest default is Executors.newCachedThreadPool, as long as 
there's a way to provide a different ExecutorService instance for BaseHelper to 
use.
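
Something along these lines; note that the setWorkerService hook below is 
hypothetical, since BaseHelper may not expose such an API:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class CachedDefault {
        public static void main(String[] args) {
            // Executors.newCachedThreadPool() is exactly this: direct handoff
            // with an effectively unbounded maximum pool size, so submissions
            // are never rejected and idle threads die after 60 seconds.
            ExecutorService cached = new ThreadPoolExecutor(
                    0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS,
                    new SynchronousQueue<Runnable>());

            // Hypothetical override hook for production tuning:
            // helper.setWorkerService(cached);

            cached.shutdown();
        }
    }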

 

How many BaseHelper instances are there, typically? The other point I made was 
that you'd get better utilization if all BaseHelpers just used the same thread 
pool, instead of one pool per BaseHelper.

 

And then there's my minor nit about the workerService field.

 

--tim

 

On Fri, Jul 2, 2010 at 12:23 PM, Tal Liron <tal.li...@threecrickets.com> wrote:

As long as you're part of the decision-making process for Restlet,
I'm OK with it.


> The caveat is that people don't always understand how to use the
> configuration parameters of ThreadPoolExecutor. There was an exchange
> on the concurrency-interest mailing list recently that brought this
> home to me. For example, it seems that a lot of people think of
> corePoolSize as minPoolSize, the opposite of maxPoolSize, which is the
> wrong way to think about it. A conservative default in Restlet is
> probably better than a user configuration based on a misunderstanding.
>
> --tim

------------------------------------------------------
http://restlet.tigris.org/ds/viewMessage.do?dsForumId=4447&dsMessageId=2628698

------------------------------------------------------
http://restlet.tigris.org/ds/viewMessage.do?dsForumId=4447&dsMessageId=2659742
