[ https://issues.apache.org/jira/browse/HBASE-20226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16406488#comment-16406488 ]

Ted Yu commented on HBASE-20226:
--------------------------------

{code}
+    if (v1Regions.size() > 0 || v2Regions.size() > 0) {
{code}
I think you may tighten the above condition by checking the sum of the sizes.
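For illustration, something like the following (equivalent for non-negative sizes):
{code}
+    if (v1Regions.size() + v2Regions.size() > 0) {
{code}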
{code}
+      ThreadPoolExecutor tpoolDelete = createExecutor("SnapshotRegionManifestDeletePool");
{code}
where:
{code}
  public static ThreadPoolExecutor createExecutor(final Configuration conf, final String name) {
    int maxThreads = conf.getInt("hbase.snapshot.thread.pool.max", 8);
{code}
You can add a new config key for this pool instead of depending on the existing config above.
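A minimal sketch of that, assuming a hypothetical property name ("hbase.snapshot.manifest.delete.threads" below is illustrative, not an existing HBase key):
{code}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.hadoop.conf.Configuration;

public final class SnapshotPools {
  // Hypothetical dedicated key so the delete pool can be sized independently
  // of the shared hbase.snapshot.thread.pool.max.
  private static final String DELETE_POOL_KEY = "hbase.snapshot.manifest.delete.threads";

  public static ThreadPoolExecutor createDeleteExecutor(final Configuration conf,
      final String name) {
    int maxThreads = conf.getInt(DELETE_POOL_KEY, 8);
    return new ThreadPoolExecutor(maxThreads, maxThreads, 60L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>(), new ThreadFactory() {
          private final AtomicInteger count = new AtomicInteger();
          @Override
          public Thread newThread(Runnable r) {
            // Name the worker threads after the pool for easier debugging.
            Thread t = new Thread(r, name + "-" + count.incrementAndGet());
            t.setDaemon(true);
            return t;
          }
        });
  }
}
{code}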

> Performance Improvement Taking Large Snapshots In Remote Filesystems
> --------------------------------------------------------------------
>
>                 Key: HBASE-20226
>                 URL: https://issues.apache.org/jira/browse/HBASE-20226
>             Project: HBase
>          Issue Type: Improvement
>          Components: snapshots
>    Affects Versions: 1.4.0
>         Environment: HBase 1.4.0 running on an AWS EMR cluster with the 
> hbase.rootdir set to point to a folder in S3 
>            Reporter: Saad Mufti
>            Priority: Minor
>         Attachments: HBASE-20226..01.patch
>
>
> When taking a snapshot of any table, one of the last steps is to delete the 
> region manifests, which have already been rolled up into a larger overall 
> manifest and thus have redundant information.
> This proposal is to do the deletion in a thread pool bounded by 
> hbase.snapshot.thread.pool.max. For large tables with a lot of regions, the 
> current single-threaded deletion takes longer than all the rest of the 
> snapshot tasks when the HBase data and the snapshot folder are both in a 
> remote filesystem like S3.
> I have a patch for this proposal almost ready and will submit it tomorrow for 
> feedback, although I haven't had a chance to write any tests yet.
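
For reference, a minimal sketch of the proposed parallel deletion (not the actual patch; the deleteRegionManifests() method, its arguments, and the direct use of fs.delete() are assumptions about where such a change would hook in):
{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class ParallelManifestDelete {
  // Sketch of the proposal: fan the per-region manifest deletes out to a
  // bounded pool instead of deleting them one at a time. On a remote
  // filesystem like S3 each delete is a network round trip, so the gain
  // comes from overlapping those round trips.
  public static void deleteRegionManifests(final FileSystem fs,
      final Configuration conf, final List<Path> regionManifestPaths)
      throws IOException {
    ExecutorService pool = Executors.newFixedThreadPool(
        conf.getInt("hbase.snapshot.thread.pool.max", 8));
    try {
      List<Future<Void>> results = new ArrayList<>();
      for (final Path manifestPath : regionManifestPaths) {
        results.add(pool.submit(new Callable<Void>() {
          @Override
          public Void call() throws IOException {
            fs.delete(manifestPath, false);  // non-recursive single-file delete
            return null;
          }
        }));
      }
      // Block until every delete has finished, surfacing the first failure.
      for (Future<Void> f : results) {
        f.get();
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw new IOException("Interrupted deleting region manifests", e);
    } catch (ExecutionException e) {
      throw new IOException("Failed deleting region manifests", e);
    } finally {
      pool.shutdown();
    }
  }
}
{code}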


