[ 
https://issues.apache.org/jira/browse/HBASE-3553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling resolved HBASE-3553.
----------------------------------

      Resolution: Fixed
    Hadoop Flags: [Reviewed]

Committed to 0.90 branch and trunk.  Thanks for the patch, Himanshu!

> number of active threads in HTable's ThreadPoolExecutor
> -------------------------------------------------------
>
>                 Key: HBASE-3553
>                 URL: https://issues.apache.org/jira/browse/HBASE-3553
>             Project: HBase
>          Issue Type: Improvement
>          Components: client
>    Affects Versions: 0.90.1
>            Reporter: Himanshu Vashishtha
>             Fix For: 0.90.2
>
>         Attachments: HBASE-3553_final.patch, ThreadPoolTester.java, 
> benchmark_results.txt
>
>
> Using a ThreadPoolExecutor with corePoolSize = 0 and a LinkedBlockingQueue 
> to hold incoming runnable tasks appears to have the effect of running only 
> 1 thread, irrespective of the maximumPoolSize set from the property 
> hbase.htable.threads.max (or the number of region servers). (This is what I 
> infer from reading the source code of the ThreadPoolExecutor class in Java 
> 1.6.)
> On a 3-node EC2 cluster, a full table scan over approximately 9M rows takes 
> almost the same time with a sequential scanner (240 secs) as with a 
> Coprocessor-based scan (230 secs) that uses HTable's pool to submit a 
> callable for each region. 
> I wrote a test class that creates a similar thread pool and checks whether 
> the pool size ever grows beyond 1. It confirms that the pool size remains 1 
> even after executing 100 requests.
> It seems the desired behavior was to release all resources when the client is 
> done reading, but this can be achieved by setting allowCoreThreadTimeOut to 
> true (after setting a positive corePoolSize).
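
The behavior described above follows from standard java.util.concurrent.ThreadPoolExecutor semantics: execute() only starts a new worker when the current thread count is below corePoolSize; otherwise it enqueues the task, and an unbounded LinkedBlockingQueue never rejects, so the pool never grows toward maximumPoolSize. A minimal sketch reproducing the problem and the suggested fix (class and method names here are illustrative, not from the patch):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizeDemo {

    // Submits `tasks` short-lived runnables, waits for them all to finish,
    // and reports the largest number of worker threads the pool ever had.
    static int largestPoolSizeAfter(ThreadPoolExecutor pool, int tasks)
            throws InterruptedException {
        final CountDownLatch done = new CountDownLatch(tasks);
        for (int i = 0; i < tasks; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(20);
                    } catch (InterruptedException ignored) {
                    }
                    done.countDown();
                }
            });
        }
        done.await();
        return pool.getLargestPoolSize();
    }

    public static void main(String[] args) throws InterruptedException {
        // Reported behavior: corePoolSize = 0 with an unbounded queue means
        // execute() always enqueues instead of starting new workers, so the
        // pool never grows past a single thread.
        ThreadPoolExecutor broken = new ThreadPoolExecutor(
                0, 10, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        System.out.println("corePoolSize=0: largest pool size = "
                + largestPoolSizeAfter(broken, 100)); // stays at 1
        broken.shutdown();

        // Suggested fix: a positive corePoolSize plus allowCoreThreadTimeOut(true),
        // so the pool scales up while idle threads are still reclaimed.
        ThreadPoolExecutor fixed = new ThreadPoolExecutor(
                10, 10, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        fixed.allowCoreThreadTimeOut(true);
        System.out.println("corePoolSize=10 + timeout: largest pool size = "
                + largestPoolSizeAfter(fixed, 100)); // grows to 10
        fixed.shutdown();
    }
}
```

With allowCoreThreadTimeOut(true), core threads are also subject to the keep-alive timeout, so the pool still drains to zero threads when the client is idle, preserving the original resource-release intent.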

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
