[jira] [Created] (HBASE-27445) result of DirectMemoryUtils#getDirectMemorySize may be wrong

2022-10-25 ruanhui (Jira)
ruanhui created HBASE-27445:
---

 Summary: result of DirectMemoryUtils#getDirectMemorySize may be 
wrong
 Key: HBASE-27445
 URL: https://issues.apache.org/jira/browse/HBASE-27445
 Project: HBase
  Issue Type: Bug
  Components: UI
Affects Versions: 3.0.0-alpha-3
Reporter: ruanhui
Assignee: ruanhui
 Fix For: 3.0.0-alpha-4


If a JVM parameter is set repeatedly, the last occurrence takes effect. For
example, if we set

-Xms30g -Xmx30g -XX:MaxDirectMemorySize=40g -XX:MaxDirectMemorySize=50g

the JVM will use a MaxDirectMemorySize of 50g, so the parsing in
DirectMemoryUtils#getDirectMemorySize must not stop at the first occurrence
of the flag.
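
For illustration, here is a minimal sketch, not the actual HBase
implementation, of parsing the flag from the JVM input arguments. Because the
JVM honors the last occurrence of a repeated flag, the parser must keep
scanning instead of returning on the first match.

{code:java}
import java.lang.management.ManagementFactory;
import java.util.List;

public class DirectMemorySizeSketch {
  /** Returns the configured MaxDirectMemorySize in bytes, or 0 if unset. */
  public static long getDirectMemorySize() {
    List<String> args =
      ManagementFactory.getRuntimeMXBean().getInputArguments();
    long size = 0;
    for (String arg : args) {
      if (!arg.startsWith("-XX:MaxDirectMemorySize=")) {
        continue;
      }
      String value = arg.substring("-XX:MaxDirectMemorySize=".length());
      if (value.isEmpty()) {
        continue;
      }
      long multiplier = 1; // plain bytes if no unit suffix
      switch (Character.toLowerCase(value.charAt(value.length() - 1))) {
        case 'k': multiplier = 1024L; break;
        case 'm': multiplier = 1024L * 1024; break;
        case 'g': multiplier = 1024L * 1024 * 1024; break;
        default: break;
      }
      if (multiplier > 1) {
        value = value.substring(0, value.length() - 1);
      }
      // Do not return early: a later duplicate of the flag overrides
      // this one, matching the JVM's own behavior.
      size = Long.parseLong(value) * multiplier;
    }
    return size;
  }
}
{code}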





[jira] [Created] (HBASE-27444) Add a tool command list_disabled_tables

2022-10-25 LiangJun He (Jira)
LiangJun He created HBASE-27444:
---

 Summary: Add a tool command list_disabled_tables
 Key: HBASE-27444
 URL: https://issues.apache.org/jira/browse/HBASE-27444
 Project: HBase
  Issue Type: New Feature
  Components: master
Affects Versions: 3.0.0-alpha-4
Reporter: LiangJun He
Assignee: LiangJun He
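
The issue body is empty; presumably the new shell command would surface what
the Java client API can already express. A hedged sketch of the equivalent
client-side logic, using only existing Admin methods (connection settings are
assumed to come from hbase-site.xml):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListDisabledTables {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Print every table that is currently in the DISABLED state.
      for (TableName table : admin.listTableNames()) {
        if (admin.isTableDisabled(table)) {
          System.out.println(table.getNameAsString());
        }
      }
    }
  }
}
{code}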








[jira] [Created] (HBASE-27443) Use java11 in the general check of our jenkins job

2022-10-25 Duo Zhang (Jira)
Duo Zhang created HBASE-27443:
-

 Summary: Use java11 in the general check of our jenkins job
 Key: HBASE-27443
 URL: https://issues.apache.org/jira/browse/HBASE-27443
 Project: HBase
  Issue Type: Task
  Components: build, jenkins
Reporter: Duo Zhang








[jira] [Resolved] (HBASE-25983) javadoc generation fails on openjdk-11.0.11+9

2022-10-25 Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-25983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-25983.
---
Fix Version/s: 2.6.0
   3.0.0-alpha-4
   2.5.2
   2.4.16
 Hadoop Flags: Reviewed
   Resolution: Fixed

Pushed to branch-2.4+.

Thanks [~ndimiduk] for reviewing and all for helping!

> javadoc generation fails on openjdk-11.0.11+9
> -
>
> Key: HBASE-25983
> URL: https://issues.apache.org/jira/browse/HBASE-25983
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, pom
>Affects Versions: 2.4.3
> Environment: maven - 3.5.4 and 3.6.2
> java - openjdk 11.0.11+9
> centos6
> hbase - 2.4.3
>Reporter: Bryan Beaudreault
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 2.6.0, 3.0.0-alpha-4, 2.5.2, 2.4.16
>
>
> I'm trying to build javadoc for HBase 2.4.3 on jdk11. The command I'm running 
> is as follows:
> {code:java}
> JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.11.0.9-2.el8_4.x86_64/  mvn 
> -Phadoop-3.0 -Phadoop.profile=3.0 -Dhadoop-three.version=3.2.2 
> -Dhadoop.guava.version=27.0-jre -Dslf4j.version=1.7.25 
> -Djetty.version=9.3.29.v20201019 -Dzookeeper.version=3.5.7 -DskipTests 
> -Dcheckstyle.skip=true site{code}
> I've tried this with maven 3.5.4 and 3.6.2. Based on JAVA_HOME above, this 
> is jdk 11.0.11+9.
> The error is as follows:
> {code:java}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.7.1:site (default-site) on 
> project hbase: Error generating maven-javadoc-plugin:3.2.0:aggregate-no-fork 
> report:
>  [ERROR] Exit code: 1 - javadoc: warning - The old Doclet and Taglet APIs in 
> the packages
>  [ERROR] com.sun.javadoc, com.sun.tools.doclets and their implementations
>  [ERROR] are planned to be removed in a future JDK release. These
>  [ERROR] components have been superseded by the new APIs in 
> jdk.javadoc.doclet.
>  [ERROR] Users are strongly recommended to migrate to the new APIs.
>  [ERROR] javadoc: error - invalid flag: -author
>  [ERROR]
>  [ERROR] Command line was: 
> /usr/lib/jvm/java-11-openjdk-11.0.11.0.9-2.el8_4.x86_64/bin/javadoc -J-Xmx2G 
> @options @packages
>  [ERROR]
>  [ERROR] Refer to the generated Javadoc files in 
> '/hbase/rpm/build/BUILD/hbase-2.4.3/target/site/apidocs' dir.
>  [ERROR] -> [Help 1]
>  [ERROR]
>  [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
>  [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>  [ERROR]
>  [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
>  [ERROR] [Help 1]{code}
> I believe this is due to the yetus doclet 
> org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet. 
> Commenting this doclet out from the userapi and testuserapi reportSets in 
> pom.xml fixes the build.
>  
>  I noticed hbase 2.4.3 depends on audience-annotations 0.5.0, which is very 
> old. I tried updating to 0.13.0, but that did not help. 
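
For reference, a simplified sketch of the kind of reportSet entry involved.
The element layout follows standard maven-javadoc-plugin conventions; the
exact HBase pom.xml differs:

{code:xml}
<!-- Simplified sketch of a javadoc reportSet using the yetus doclet.
     Removing (or commenting out) the doclet/docletArtifact pair falls
     back to the JDK's standard doclet, which the reporter found fixes
     the JDK 11 build. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <reportSets>
    <reportSet>
      <id>userapi</id>
      <reports>
        <report>aggregate-no-fork</report>
      </reports>
      <configuration>
        <doclet>org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet</doclet>
        <docletArtifact>
          <groupId>org.apache.yetus</groupId>
          <artifactId>audience-annotations</artifactId>
          <version>0.5.0</version>
        </docletArtifact>
      </configuration>
    </reportSet>
  </reportSets>
</plugin>
{code}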





[jira] [Resolved] (HBASE-27091) Speed up the loading of table descriptor from filesystem

2022-10-25 Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-27091.
---
Resolution: Fixed

> Speed up the loading of table descriptor from filesystem
> 
>
> Key: HBASE-27091
> URL: https://issues.apache.org/jira/browse/HBASE-27091
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 3.0.0-alpha-3
>Reporter: LiangJun He
>Assignee: LiangJun He
>Priority: Minor
> Fix For: 3.0.0-alpha-4
>
>
> If there are a large number of tables in an HBase cluster, it takes a long 
> time to fully load all table descriptors from the filesystem for the first 
> time.
> In our production cluster, there were 5 + tables. It took several minutes 
> to load the table descriptors from the filesystem. This problem seriously 
> affects the performance of HMaster active/standby switchover.
> We should support concurrent loading to solve this problem.
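
A minimal sketch of the concurrent-loading idea, not the actual HBase patch;
the single-table loader below is a hypothetical stand-in for reading one
table's descriptor from the filesystem:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConcurrentDescriptorLoading {
  // Hypothetical stand-in for reading one table's descriptor file.
  static String loadDescriptor(String tableDir) {
    return "descriptor-of-" + tableDir;
  }

  /** Loads all descriptors in parallel instead of one by one. */
  public static List<String> loadAll(List<String> tableDirs, int threads)
      throws InterruptedException, ExecutionException {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try {
      List<Future<String>> futures = new ArrayList<>();
      for (String dir : tableDirs) {
        futures.add(pool.submit(() -> loadDescriptor(dir)));
      }
      List<String> result = new ArrayList<>();
      for (Future<String> f : futures) {
        result.add(f.get()); // propagates the first failure, if any
      }
      return result;
    } finally {
      pool.shutdown();
    }
  }
}
{code}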





[jira] [Reopened] (HBASE-27091) Speed up the loading of table descriptor from filesystem

2022-10-25 Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reopened HBASE-27091:
---

> Speed up the loading of table descriptor from filesystem
> 
>
> Key: HBASE-27091
> URL: https://issues.apache.org/jira/browse/HBASE-27091
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 3.0.0-alpha-3
>Reporter: LiangJun He
>Assignee: LiangJun He
>Priority: Minor
> Fix For: 3.0.0-alpha-4
>
>
> If there are a large number of tables in an HBase cluster, it takes a long 
> time to fully load all table descriptors from the filesystem for the first 
> time.
> In our production cluster, there were 5 + tables. It took several minutes 
> to load the table descriptors from the filesystem. This problem seriously 
> affects the performance of HMaster active/standby switchover.
> We should support concurrent loading to solve this problem.





[jira] [Reopened] (HBASE-26976) Update related comments after HMaster can load the live RS infos from local region

2022-10-25 Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reopened HBASE-26976:
---

> Update related comments after HMaster can load the live RS infos from local 
> region
> --
>
> Key: HBASE-26976
> URL: https://issues.apache.org/jira/browse/HBASE-26976
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 3.0.0-alpha-3
>Reporter: LiangJun He
>Assignee: LiangJun He
>Priority: Minor
> Fix For: 3.0.0-alpha-3
>
>






[jira] [Resolved] (HBASE-26976) Update related comments after HMaster can load the live RS infos from local region

2022-10-25 Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-26976.
---
Resolution: Fixed

> Update related comments after HMaster can load the live RS infos from local 
> region
> --
>
> Key: HBASE-26976
> URL: https://issues.apache.org/jira/browse/HBASE-26976
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Affects Versions: 3.0.0-alpha-3
>Reporter: LiangJun He
>Assignee: LiangJun He
>Priority: Minor
> Fix For: 3.0.0-alpha-3
>
>






[jira] [Resolved] (HBASE-26898) Cannot rebuild a cluster from an existing root directory

2022-10-25 Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-26898.
---
Resolution: Fixed

> Cannot rebuild a cluster from an existing root directory
> 
>
> Key: HBASE-26898
> URL: https://issues.apache.org/jira/browse/HBASE-26898
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 3.0.0-alpha-2
>Reporter: LiangJun He
>Assignee: LiangJun He
>Priority: Major
> Fix For: 3.0.0-alpha-2
>
>
> When I tried to rebuild an HBase cluster with the rootdir configured as an 
> existing directory (generated by another HBase cluster of the same 
> version), I saw the following error message:
> {code:java}
> java.net.UnknownHostException: Call to address=worker-1.cluster-xxx:16020 
> failed on local exception: java.net.UnknownHostException: 
> worker-1.cluster-xxx:16020 could not be resolved
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:234)
>     at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
>     at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:93)
>     at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:424)
>     at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:419)
>     at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:119)
>     at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:134)
>     at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection.lambda$sendRequest$4(NettyRpcConnection.java:351)
>     at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
>     at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469)
>     at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
>     at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
>     at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>     at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.UnknownHostException: worker-1.cluster-xxx:16020 could 
> not be resolved
>     at 
> org.apache.hadoop.hbase.ipc.RpcConnection.getRemoteInetAddress(RpcConnection.java:192)
>     at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection.connect(NettyRpcConnection.java:275)
>     at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$800(NettyRpcConnection.java:78)
>     at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$4.run(NettyRpcConnection.java:325)
>     at 
> org.apache.hadoop.hbase.ipc.HBaseRpcControllerImpl.notifyOnCancel(HBaseRpcControllerImpl.java:262)
>     at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection.sendRequest0(NettyRpcConnection.java:308)
>     at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection.lambda$sendRequest$4(NettyRpcConnection.java:349)
>  {code}
> Eventually, I failed to create the cluster.
> But in cloud environments this operation is a common scenario (rebuilding 
> a cluster from an existing rootdir directory).
>  
>  
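
For illustration, a standalone sketch of the failure mode. The new cluster
apparently tries to contact region server addresses recorded by the previous
cluster, and those hostnames no longer resolve; the hostname below is the
placeholder from the log above:

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveOldRegionServer {
  public static void main(String[] args) {
    // Hostname recorded by the previous cluster; it does not exist in
    // the new environment, so name resolution fails just as in
    // RpcConnection.getRemoteInetAddress in the stack trace above.
    String oldHost = "worker-1.cluster-xxx";
    try {
      InetAddress.getByName(oldHost);
      System.out.println(oldHost + " resolved unexpectedly");
    } catch (UnknownHostException e) {
      System.out.println(oldHost + " could not be resolved");
    }
  }
}
{code}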





[jira] [Reopened] (HBASE-26898) Cannot rebuild a cluster from an existing root directory

2022-10-25 Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reopened HBASE-26898:
---

> Cannot rebuild a cluster from an existing root directory
> 
>
> Key: HBASE-26898
> URL: https://issues.apache.org/jira/browse/HBASE-26898
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 3.0.0-alpha-2
>Reporter: LiangJun He
>Assignee: LiangJun He
>Priority: Major
> Fix For: 3.0.0-alpha-2
>
>
> When I tried to rebuild an HBase cluster with the rootdir configured as an 
> existing directory (generated by another HBase cluster of the same 
> version), I saw the following error message:
> {code:java}
> java.net.UnknownHostException: Call to address=worker-1.cluster-xxx:16020 
> failed on local exception: java.net.UnknownHostException: 
> worker-1.cluster-xxx:16020 could not be resolved
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:234)
>     at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:387)
>     at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:93)
>     at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:424)
>     at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:419)
>     at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:119)
>     at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:134)
>     at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection.lambda$sendRequest$4(NettyRpcConnection.java:351)
>     at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
>     at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469)
>     at 
> org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:384)
>     at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
>     at 
> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>     at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.UnknownHostException: worker-1.cluster-xxx:16020 could 
> not be resolved
>     at 
> org.apache.hadoop.hbase.ipc.RpcConnection.getRemoteInetAddress(RpcConnection.java:192)
>     at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection.connect(NettyRpcConnection.java:275)
>     at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$800(NettyRpcConnection.java:78)
>     at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$4.run(NettyRpcConnection.java:325)
>     at 
> org.apache.hadoop.hbase.ipc.HBaseRpcControllerImpl.notifyOnCancel(HBaseRpcControllerImpl.java:262)
>     at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection.sendRequest0(NettyRpcConnection.java:308)
>     at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection.lambda$sendRequest$4(NettyRpcConnection.java:349)
>  {code}
> Eventually, I failed to create the cluster.
> But in cloud environments this operation is a common scenario (rebuilding 
> a cluster from an existing rootdir directory).
>  
>  


