[ https://issues.apache.org/jira/browse/OAK-6622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16155206#comment-16155206 ]

Chetan Mehrotra edited comment on OAK-6622 at 9/6/17 11:23 AM:
---------------------------------------------------------------

Done this by setting corePoolSize to maxPoolSize with 1807468, 1807469. We 
already have the core thread timeout set, so idle threads get removed after 60 secs.

This ensures that a single bad job does not bring down other parts, as seen in OAK-6619.
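A minimal sketch of what the resulting configuration looks like, assuming the standard {{ThreadPoolExecutor}} API (the pool size and thread factory below are placeholders, not copied from the actual commits):

{code}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// corePoolSize == maxPoolSize, so up to 5 threads can be started even though
// the LinkedBlockingQueue is unbounded and never reports itself as full.
ThreadPoolExecutor executor = new ThreadPoolExecutor(
        5,   // corePoolSize, now equal to maxPoolSize
        5,   // maxPoolSize
        60L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>(),
        r -> new Thread(r, "oak-lucene-sketch")); // placeholder thread factory
// Idle core threads are reclaimed after the 60s keep-alive, matching the
// behaviour described in the comment above.
executor.allowCoreThreadTimeOut(true);
{code}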


was (Author: chetanm):
Done this with 1807468, 1807469

> Configure default core pool size for thread pool used by oak-lucene
> -------------------------------------------------------------------
>
>                 Key: OAK-6622
>                 URL: https://issues.apache.org/jira/browse/OAK-6622
>             Project: Jackrabbit Oak
>          Issue Type: Improvement
>          Components: lucene
>            Reporter: Chetan Mehrotra
>            Assignee: Chetan Mehrotra
>             Fix For: 1.8, 1.7.7
>
>
> {{LuceneIndexProviderService}} currently configures a thread pool like below
> {code}
>  ThreadPoolExecutor executor = new ThreadPoolExecutor(
>           0,  //corePoolSize
>           5, //maxPoolSize
>          60L, 
>          TimeUnit.SECONDS,
>          new LinkedBlockingQueue<Runnable>(), //Unbounded queue
>          new ThreadFactory() {
> {code}
> Per 
> [ThreadPoolExecutor|https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html]
>  
> {quote}
> If there are more than corePoolSize but less than maximumPoolSize threads 
> running, a new thread will be created *only if the queue is full*
> {quote}
> Because the queue is unbounded, the thread pool created by oak-lucene currently 
> only ever has 1 thread to handle all tasks. If that thread gets stuck for some 
> reason (e.g. waiting on a lock), it prevents all other tasks in the pool from 
> making progress.
>  
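
A minimal standalone sketch of the behaviour described above: with a corePoolSize of 0 and an unbounded {{LinkedBlockingQueue}}, the pool never grows past one thread no matter how many tasks are queued (the class name and task bodies here are illustrative only):

{code}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizeDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                0,   // corePoolSize
                5,   // maxPoolSize
                60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>()); // unbounded queue

        // Submit 10 tasks that each block for a while.
        for (int i = 0; i < 10; i++) {
            executor.execute(() -> {
                try { Thread.sleep(1000); } catch (InterruptedException ignored) {}
            });
        }

        // The unbounded queue never reports "full", so no threads are created
        // beyond the single one started for the first task.
        System.out.println("pool size = " + executor.getPoolSize()); // prints 1
        executor.shutdown();
    }
}
{code}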



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
