You're right. If multiple threads are scheduling jobs while the pool is at
a thread count of maxPoolSize - 1, chances are that a RejectedExecutionException
is thrown. The offer method should use a lock. Like so:

   public boolean offer(Runnable inJob) {
     _lock.lock(); // _lock: a ReentrantLock field added to the queue
     try {
       if (_executor.getPoolSize() < _executor.getMaximumPoolSize()) {
         return false;
       }
       return super.offer(inJob);
     } finally {
       _lock.unlock();
     }
   }
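
For completeness, here is a self-contained sketch of the queue with the lock in
place. Assumptions on my part: _lock is a ReentrantLock field (the earlier
snippets don't show its declaration), and the class name is shortened for the
example. Treat it as a sketch of the idea, not a proven fix.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: a work queue that refuses offers until the pool has reached its
// maximum size, so the ThreadPoolExecutor is forced to create a new thread
// instead of queueing the job.
class ThreadCreationTriggeringQueue extends LinkedBlockingQueue<Runnable> {
  private static final long serialVersionUID = 1L;
  private final ReentrantLock _lock = new ReentrantLock();
  private transient ThreadPoolExecutor _executor;

  public void setExecutor(ThreadPoolExecutor inExecutor) {
    _executor = inExecutor;
  }

  @Override
  public boolean offer(Runnable inJob) {
    _lock.lock();
    try {
      // Below the maximum: refuse, which makes the executor try to add a worker.
      if (_executor.getPoolSize() < _executor.getMaximumPoolSize()) {
        return false;
      }
      return super.offer(inJob);
    } finally {
      _lock.unlock();
    }
  }
}
```

The lock serializes concurrent callers of offer; the executor's own pool-size
bookkeeping still happens outside it, so a RejectedExecutionHandler as a
backstop may still be worthwhile.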

--Arjan Verstoep

 
On 2 Jul 2014, at 09:52, Lance Java wrote:

> I'm not convinced this solution is thread-safe.
> On 1 Jul 2014 17:21, "Arjan Verstoep" <a.j.verst...@alumnus.utwente.nl>
> wrote:
> 
>> Yes, the default implementation is very counter-intuitive. I investigated
>> it once, and it was possible to 'repair' the ThreadPoolExecutor by
>> providing the 'right' work queue.
>> 
>> I don't know if you can apply this to Tapestry's ThreadPoolExecutor, but
>> anyhow: here's how I do it:
>> 
>> --Arjan Verstoep
>> 
>> /**
>>   * Specific implementation of {@link LinkedBlockingQueue} that won't
>>   * accept offered elements as long as the given {@link ThreadPoolExecutor}
>>   * has not yet reached its maximum pool size.
>>   * <p>The intended effect is a ThreadPoolExecutor that only starts
>>   * adding Runnables to the Queue <i>after</i> the maximum number of
>>   * Threads is reached. By default, a ThreadPoolExecutor only starts new
>>   * Threads beyond the core size when the Queue is full, which with an
>>   * unbounded Queue will <i>never</i> be the case.</p>
>>   * <p>This class will throw an unchecked
>>   * {@link java.lang.NullPointerException} if the ThreadPoolExecutor
>>   * has not been set with {@link #setExecutor(ThreadPoolExecutor)}!</p>
>>   */
>> class ThreadCreationTriggeringLinkedBlockingQueue<T>
>>    extends LinkedBlockingQueue<Runnable> {
>>    private static final long serialVersionUID = 1L;
>>    private transient ThreadPoolExecutor _executor;
>> 
>>    /**
>>     * <p>If the Executor's current pool size is below the maximum pool size,
>>     * this method will not add the {@link Runnable} to the Queue.<br/>
>>     * The {@link ThreadPoolExecutor} will then be forced to start a new
>>     * Thread.</p>
>>     * {@inheritDoc}
>>     */
>>    @Override
>>    public boolean offer(Runnable inJob) {
>>      if (_executor.getPoolSize() < _executor.getMaximumPoolSize()) {
>>        return false;
>>      }
>>      return super.offer(inJob);
>>    }
>> 
>>    /**
>>     * Sets the executor which will be used to decide whether or not to
>> add the
>>     * Runnable to the Queue.
>>     *
>>     * @param inExecutor the ThreadPoolExecutor
>>     */
>>    public void setExecutor(ThreadPoolExecutor inExecutor) {
>>      _executor = inExecutor;
>>    }
>>  }
>> 
>> ThreadCreationTriggeringLinkedBlockingQueue<Runnable> queue =
>>    new ThreadCreationTriggeringLinkedBlockingQueue<Runnable>();
>> ThreadPoolExecutor threadpoolExecutor = new ThreadPoolExecutor(
>>    availableProcessors(), THREADPOOLMULTIPLICATOR * availableProcessors(),
>>    10, TimeUnit.MINUTES, queue);
>> queue.setExecutor(threadpoolExecutor);
>> 
>> 
>> On 1 Jul 2014, at 16:04, Lance Java wrote:
>> 
>>> I feel the implementation is flawed.
>>> 
>>> I would prefer that the pool size would increase from minimum to maximum
>>> under load to increase throughput and then revert to the minimum when
>>> not under load.
>>> 
>>> But instead, the pool size stays at minimum until the queue is full. In my
>>> opinion, the application is about to fall over at this point. Only then
>>> will the pool size increase, which I think is crazy / stupid.
>>> 
>>> Because of this, I've always seen core pool size and max pool size set to
>>> the same value.
>>> 
>>> To achieve my preferred behaviour (above) I think you can specify
>>> different pooling behaviour. I think it requires throwing exceptions and
>>> extending ThreadPoolExecutor, or something nasty like that.
>>> On 1 Jul 2014 12:14, "D Tim Cummings" <t...@triptera.com.au> wrote:
>>> 
>>>> Ok, thanks for explaining this... I think :)
>>>> 
>>>> I had a read of the ThreadPoolExecutor javadoc. It seems to me that
>>>> core-pool-size should be described as the maximum pool size before
>>>> queuing. Describing it as the minimum pool size is misleading because
>>>> there is no minimum. The number of threads in the pool before any tasks
>>>> have been invoked is always zero.
>>>> 
>>>> Tim
>>>> 
>>>> On 1 Jul 2014, at 16:25, Lance Java <lance.j...@googlemail.com> wrote:
>>>> 
>>>>> If you read the javadoc for java.util.concurrent.ThreadPoolExecutor
>>>>> you'll see that the number of threads will only increase when the queue
>>>>> has reached its capacity. Crazy / stupid behaviour if you ask me... But
>>>>> expected.
>>>>> 
>>>>> I've been caught out by this before when I set core pool size to 1,
>>>>> expecting the thread size to increase under load.
>>>>> On 1 Jul 2014 02:53, "D Tim Cummings" <t...@triptera.com.au> wrote:
>>>>> 
>>>>>> Hi
>>>>>> 
>>>>>> I am using Tapestry 5.3.7 and the ParallelExecutor. When I increase the
>>>>>> max-pool-size I am not able to use any more threads. However when I
>>>>>> increase the core-pool-size I am able to use more threads. It looks like
>>>>>> the max-pool-size is not doing anything (or I am not understanding what
>>>>>> it is supposed to do).
>>>>>> 
>>>>>> When I use the defaults I get a maximum of 3 threads and other threads
>>>>>> seem to wait in a queue and execute later.
>>>>>> 
>>>>>> When I use the following settings in web.xml I still get only 3 threads
>>>>>> 
>>>>>> <context-param>
>>>>>> <param-name>tapestry.thread-pool.core-pool-size</param-name>
>>>>>> <param-value>3</param-value>
>>>>>> </context-param>
>>>>>> <context-param>
>>>>>> <param-name>tapestry.thread-pool.max-pool-size</param-name>
>>>>>> <param-value>20</param-value>
>>>>>> </context-param>
>>>>>> 
>>>>>> When I use the following settings in web.xml I get 3 threads
>>>>>> 
>>>>>> <context-param>
>>>>>> <param-name>tapestry.thread-pool.core-pool-size</param-name>
>>>>>> <param-value>3</param-value>
>>>>>> </context-param>
>>>>>> <context-param>
>>>>>> <param-name>tapestry.thread-pool.max-pool-size</param-name>
>>>>>> <param-value>200</param-value>
>>>>>> </context-param>
>>>>>> 
>>>>>> When I use the following settings in web.xml I get 20 threads
>>>>>> 
>>>>>> <context-param>
>>>>>> <param-name>tapestry.thread-pool.core-pool-size</param-name>
>>>>>> <param-value>20</param-value>
>>>>>> </context-param>
>>>>>> <context-param>
>>>>>> <param-name>tapestry.thread-pool.max-pool-size</param-name>
>>>>>> <param-value>20</param-value>
>>>>>> </context-param>
>>>>>> 
>>>>>> I noticed that the documentation says the default max-pool-size is 20
>>>>>> ( http://tapestry.apache.org/parallel-execution.html ) while the API
>>>>>> docs say the default is 10 (
>>>>>> http://tapestry.apache.org/current/apidocs/org/apache/tapestry5/ioc/IOCSymbols.html
>>>>>> ).
>>>>>> 
>>>>>> I am invoking my threads from a service which contains
>>>>>> 
>>>>>> @Inject
>>>>>> private ParallelExecutor executor;
>>>>>> 
>>>>>> @Override
>>>>>> public void doStartThread() {
>>>>>>  if ( myFuture == null || myFuture.isDone() ) {
>>>>>>    myFuture = executor.invoke(new MyInvokable());
>>>>>>  }
>>>>>> }
>>>>>> 
>>>>>> I have tried in Jetty 8.1.2 (Run Jetty Run in Eclipse) and Tomcat
>>>>>> 7.0.52.
>>>>>> 
>>>>>> Cheers
>>>>>> 
>>>>>> Tim
>>>>>> 
>>>> 
>>>> 
>> 
>> 
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: users-unsubscr...@tapestry.apache.org
>> For additional commands, e-mail: users-h...@tapestry.apache.org
>> 
>> 
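
PS: for anyone who wants to see the default behaviour Lance describes, here is
a minimal demo (the class name and pool bounds are illustrative, not Tapestry's
defaults). With an unbounded queue the pool never grows past the core size, no
matter what maximum you configure:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DefaultPoolDemo {
  // Submits four long-running tasks to a pool with core=1, max=4 and an
  // unbounded queue, and reports the pool size while the tasks are pending.
  static int observedPoolSize() throws InterruptedException {
    ThreadPoolExecutor executor = new ThreadPoolExecutor(
        1, 4, 10, TimeUnit.MINUTES, new LinkedBlockingQueue<Runnable>());
    CountDownLatch release = new CountDownLatch(1);
    for (int i = 0; i < 4; i++) {
      executor.execute(() -> {
        try {
          release.await();
        } catch (InterruptedException ignored) {
        }
      });
    }
    // One worker was started for the core; the other three tasks were queued,
    // because the unbounded queue accepted them before a new thread was tried.
    int poolSize = executor.getPoolSize();
    release.countDown();
    executor.shutdown();
    return poolSize;
  }

  public static void main(String[] args) throws InterruptedException {
    System.out.println("pool size under load = " + observedPoolSize()); // prints 1
  }
}
```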

