In every minor compaction job of HBase,

org.apache.phoenix.schema.stats.DefaultStatisticsCollector.initGuidepostDepth()
is called,

and the SYSTEM.CATALOG table is opened to read the guidepost width via

    htable = env.getTable(
            SchemaUtil.getPhysicalTableName(
                    PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES,
                    env.getConfiguration()));

This call creates one ZooKeeper connection in order to retrieve the cluster id.

DefaultStatisticsCollector does not close this ZooKeeper connection
immediately after reading the guidepost width; the connection remains
alive until the HRegion is closed.

This is not a problem with a small number of regions, but when the number of
regions is large and upsert operations are frequent, the number of ZooKeeper
connections gradually increases to hundreds, and the ZooKeeper server nodes
run short of available TCP/IP ports.

I think this ZooKeeper connection should be closed immediately after the
guidepost width is read.
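The fix I have in mind is a plain close-after-use pattern: release the table
(and with it the underlying connection) as soon as the guidepost width has
been read, instead of leaving it open for the region's lifetime. Below is a
minimal self-contained sketch of the pattern; the types and method names are
illustrative stand-ins, not the actual Phoenix/HBase APIs:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CloseAfterUse {
    // Stand-in counter for the ZooKeeper-backed connections held open.
    static final AtomicInteger openConnections = new AtomicInteger();

    // Hypothetical stand-in for the SYSTEM.CATALOG table handle; opening it
    // acquires a connection, closing it releases the connection.
    static class CatalogTable implements AutoCloseable {
        CatalogTable() { openConnections.incrementAndGet(); }
        long readGuidepostWidth() { return 300_000_000L; } // dummy value
        @Override public void close() { openConnections.decrementAndGet(); }
    }

    // Current (leaky) shape: the handle is never closed after the read,
    // so the connection lingers until something else tears it down.
    static long readLeaky() {
        CatalogTable table = new CatalogTable();
        return table.readGuidepostWidth();
    }

    // Proposed shape: try-with-resources closes the handle (and releases
    // the connection) immediately after the guidepost width is read.
    static long readAndClose() {
        try (CatalogTable table = new CatalogTable()) {
            return table.readGuidepostWidth();
        }
    }

    public static void main(String[] args) {
        readAndClose();
        System.out.println("open after readAndClose: " + openConnections.get()); // 0
        readLeaky();
        System.out.println("open after readLeaky: " + openConnections.get());    // 1
    }
}
```

With this shape, each compaction would pay the cost of opening and closing
the catalog handle once, but no per-region connection would accumulate.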

My environment is:

- HBase: v1.1.8
- Phoenix: v4.9.0-HBase-1.1
- 80 HBase nodes

The following is the stack dump at the point of ZooKeeper connection creation:


        at java.lang.Thread.dumpStack(Thread.java:1329)
        at org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:48)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(ConnectionManager.java:1663)
        at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:104)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:885)
        at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:638)
        at org.apache.hadoop.hbase.client.CoprocessorHConnection.<init>(CoprocessorHConnection.java:99)
        at org.apache.hadoop.hbase.client.CoprocessorHConnection.<init>(CoprocessorHConnection.java:89)
        at org.apache.hadoop.hbase.client.CoprocessorHConnection.getConnectionForEnvironment(CoprocessorHConnection.java:61)
        at org.apache.hadoop.hbase.client.HTableWrapper.createWrapper(HTableWrapper.java:69)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.getTable(CoprocessorHost.java:514)
        at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$Environment.getTable(CoprocessorHost.java:503)
        at org.apache.phoenix.schema.stats.DefaultStatisticsCollector.initGuidepostDepth(DefaultStatisticsCollector.java:123)
        at org.apache.phoenix.schema.stats.DefaultStatisticsCollector.init(DefaultStatisticsCollector.java:299)
        at org.apache.phoenix.schema.stats.DefaultStatisticsCollector.getInternalScanner(DefaultStatisticsCollector.java:293)
        at org.apache.phoenix.schema.stats.DefaultStatisticsCollector.createCompactionScanner(DefaultStatisticsCollector.java:285)
        at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver$2.run(UngroupedAggregateRegionObserver.java:692)
        at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver$2.run(UngroupedAggregateRegionObserver.java:681)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:448)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:429)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
        at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:205)
        at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.preCompact(UngroupedAggregateRegionObserver.java:681)
        at org.apache.hadoop.hbase.coprocessor.BaseRegionObserver.preCompact(BaseRegionObserver.java:195)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$9.call(RegionCoprocessorHost.java:612)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1708)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1785)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1757)
        at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preCompact(RegionCoprocessorHost.java:607)
        at org.apache.hadoop.hbase.regionserver.compactions.Compactor.postCreateCoprocScanner(Compactor.java:247)
        at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:91)
        at org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:119)
        at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1221)
        at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1845)
        at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:521)
        at org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:557)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
