[jira] [Commented] (HBASE-24102) RegionMover should exclude draining/decommissioning nodes from target RSs
[ https://issues.apache.org/jira/browse/HBASE-24102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142641#comment-17142641 ] Hudson commented on HBASE-24102: Results for branch branch-2.3 [build #150 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/150/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/150/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/150/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/150/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/150/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > RegionMover should exclude draining/decommissioning nodes from target RSs > - > > Key: HBASE-24102 > URL: https://issues.apache.org/jira/browse/HBASE-24102 > Project: HBase > Issue Type: Improvement >Reporter: Anoop Sam John >Assignee: Viraj Jasani >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0, 2.1.10, 2.2.5 > > > When using the RegionMover tool to unload the regions from a given RS, it decides > the list of destination RSs by > {code} > List<ServerName> regionServers = new ArrayList<>(); > regionServers.addAll(admin.getRegionServers()); > // Remove the host Region server from target Region Servers list > ServerName server = stripServer(regionServers, hostname, port); > . 
> // Remove RS present in the exclude file > stripExcludes(regionServers); > stripMaster(regionServers); > {code} > Yes, it is removing the RSs mentioned in the exclude file. > Better: when the RegionMover user is NOT mentioning any exclude list, we can > exclude the draining/decommissioning RSs returned by > Admin#listDecommissionedRegionServers(). -- This message was sent by Atlassian Jira (v8.3.4#803005)
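The proposed default behavior can be sketched in plain Java. Everything below is illustrative rather than HBase API: server names are modeled as Strings (in HBase they would be ServerName instances from Admin#getRegionServers() and Admin#listDecommissionedRegionServers()), and the class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;

public class RegionMoverTargets {
    // Sketch of the proposal: when the user supplies no exclude file,
    // drop decommissioned/draining servers from the target list by default;
    // otherwise honor the user's exclude list as today.
    static List<String> targets(List<String> liveServers,
                                List<String> decommissioned,
                                List<String> userExcludes) {
        List<String> regionServers = new ArrayList<>(liveServers);
        if (userExcludes.isEmpty()) {
            // Proposed change: exclude draining/decommissioning RSs by default
            regionServers.removeAll(new HashSet<>(decommissioned));
        } else {
            regionServers.removeAll(new HashSet<>(userExcludes));
        }
        return regionServers;
    }

    public static void main(String[] args) {
        List<String> live = List.of("rs1", "rs2", "rs3");
        List<String> draining = List.of("rs2");
        System.out.println(targets(live, draining, List.of())); // [rs1, rs3]
    }
}
```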
[GitHub] [hbase] anoopsjohn commented on a change in pull request #1955: HBASE-24616 Remove BoundedRecoveredHFilesOutputSink dependency on a T…
anoopsjohn commented on a change in pull request #1955: URL: https://github.com/apache/hbase/pull/1955#discussion_r443967906 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedRecoveredHFilesOutputSink.java ## @@ -191,50 +186,22 @@ public boolean keepRegionEvent(Entry entry) { return false; } + /** + * @return Returns a base HFile without compressions or encodings; good enough for recovery Review comment: In Jira, I added a comment about making sure we will compact all these tiny HFiles created as part of WAL split. If we can make sure of that part, I would say it is OK to create these tiny files without any table-specific things like compression/DBE etc. Anyway, we know all these files are going to get compacted and rewritten once we open the region. As of now we are not sure whether or when these tiny files will get compacted. In that case I would +1 your ask: do this HFile create with defaults as a fallback only. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache9 commented on a change in pull request #1955: HBASE-24616 Remove BoundedRecoveredHFilesOutputSink dependency on a T…
Apache9 commented on a change in pull request #1955: URL: https://github.com/apache/hbase/pull/1955#discussion_r443946747 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedRecoveredHFilesOutputSink.java ## @@ -191,50 +186,22 @@ public boolean keepRegionEvent(Entry entry) { return false; } + /** + * @return Returns a base HFile without compressions or encodings; good enough for recovery Review comment: Should we try to get the TableDescriptor first? If that is not possible, then we fall back to writing generic HFiles.
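The suggested flow can be sketched in plain Java: try each table-descriptor source in order (e.g. master RPC first, then the local cache) and only fall back to generic defaults when every source fails. The class, method names, and the String stand-in for TableDescriptor are all illustrative, not actual HBase API.

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Supplier;

public class DescriptorLookup {
    // Try sources in order; fall back to defaults only when all of them fail.
    static String lookupOrDefault(List<Supplier<Optional<String>>> sources,
                                  String genericDefault) {
        for (Supplier<Optional<String>> source : sources) {
            Optional<String> desc = source.get();
            if (desc.isPresent()) {
                return desc.get();  // got a real descriptor; use its settings
            }
        }
        return genericDefault;      // last resort: write a generic HFile
    }

    public static void main(String[] args) {
        List<Supplier<Optional<String>>> sources =
            List.of(Optional::empty,                   // master lookup failed
                    () -> Optional.of("local-cache")); // local cache hit
        System.out.println(lookupOrDefault(sources, "generic-defaults"));
    }
}
```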
[GitHub] [hbase] infraio commented on a change in pull request #1955: HBASE-24616 Remove BoundedRecoveredHFilesOutputSink dependency on a T…
infraio commented on a change in pull request #1955: URL: https://github.com/apache/hbase/pull/1955#discussion_r443942436

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedRecoveredHFilesOutputSink.java

## @@ -191,50 +186,22 @@ public boolean keepRegionEvent(Entry entry) {
     return false;
   }

+  /**
+   * @return Returns a base HFile without compressions or encodings; good enough for recovery
+   *         given hfile has metadata on how it was written.
+   */
   private StoreFileWriter createRecoveredHFileWriter(TableName tableName, String regionName,
       long seqId, String familyName, boolean isMetaTable) throws IOException {
     Path outputDir = WALSplitUtil.tryCreateRecoveredHFilesDir(walSplitter.rootFS, walSplitter.conf,
       tableName, regionName, familyName);
     StoreFileWriter.Builder writerBuilder =
       new StoreFileWriter.Builder(walSplitter.conf, CacheConfig.DISABLED, walSplitter.rootFS)
         .withOutputDir(outputDir);
-    TableDescriptor tableDesc =
-      tableDescCache.computeIfAbsent(tableName, t -> getTableDescriptor(t));
-    if (tableDesc == null) {
-      throw new IOException("Failed to get table descriptor for table " + tableName);
-    }
-    ColumnFamilyDescriptor cfd = tableDesc.getColumnFamily(Bytes.toBytesBinary(familyName));
-    HFileContext hFileContext = createFileContext(cfd, isMetaTable);
-    return writerBuilder.withFileContext(hFileContext).withBloomType(cfd.getBloomFilterType())
-      .build();
-  }
-
-  private HFileContext createFileContext(ColumnFamilyDescriptor cfd, boolean isMetaTable)
-      throws IOException {
-    return new HFileContextBuilder().withCompression(cfd.getCompressionType())
-      .withChecksumType(HStore.getChecksumType(walSplitter.conf))
-      .withBytesPerCheckSum(HStore.getBytesPerChecksum(walSplitter.conf))
-      .withBlockSize(cfd.getBlocksize()).withCompressTags(cfd.isCompressTags())
-      .withDataBlockEncoding(cfd.getDataBlockEncoding()).withCellComparator(
-        isMetaTable ? CellComparatorImpl.META_COMPARATOR : CellComparatorImpl.COMPARATOR)
-      .build();
-  }
-
-  private TableDescriptor getTableDescriptor(TableName tableName) {
-    if (walSplitter.rsServices != null) {
-      try {
-        return walSplitter.rsServices.getConnection().getAdmin().getDescriptor(tableName);
-      } catch (IOException e) {
-        LOG.warn("Failed to get table descriptor for {}", tableName, e);
-      }
-    }
-    LOG.info("Failed getting {} table descriptor from master; trying local", tableName);
-    try {
-      return walSplitter.tableDescriptors.get(tableName);

Review comment: The tableDescriptors may be removed from WALSplitter, too?
[GitHub] [hbase] infraio commented on a change in pull request #1955: HBASE-24616 Remove BoundedRecoveredHFilesOutputSink dependency on a T…
infraio commented on a change in pull request #1955: URL: https://github.com/apache/hbase/pull/1955#discussion_r443942069

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/RSProcedureHandler.java

## @@ -48,7 +48,7 @@ public void process() {
     try {
       callable.call();
     } catch (Throwable t) {
-      LOG.error("Error when call RSProcedureCallable: ", t);
+      LOG.error("pid=" + this.procId, t);

Review comment: Is this log message too terse now?
[GitHub] [hbase] infraio commented on a change in pull request #1955: HBASE-24616 Remove BoundedRecoveredHFilesOutputSink dependency on a T…
infraio commented on a change in pull request #1955: URL: https://github.com/apache/hbase/pull/1955#discussion_r443941820

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java

## @@ -2037,8 +2037,7 @@ private void startServices() throws IOException {
       this.splitLogWorker = new SplitLogWorker(sinkConf, this, this, walFactory);
       splitLogWorker.start();
-    } else {
-      LOG.warn("SplitLogWorker Service NOT started; CoordinatedStateManager is null");
+      LOG.debug("SplitLogWorker started");

Review comment: Move the log out of the if {} code block?
[jira] [Commented] (HBASE-24585) Failed start recovering crash in standalone mode if procedure-based distributed WAL split & hbase.wal.split.to.hfile=true
[ https://issues.apache.org/jira/browse/HBASE-24585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142590#comment-17142590 ] Michael Stack commented on HBASE-24585: --- HBASE-24616 has a suggested fix for BoundedRecoveredHFilesOutputSink not being able to get a TableDescriptor at all times. > Failed start recovering crash in standalone mode if procedure-based > distributed WAL split & hbase.wal.split.to.hfile=true > - > > Key: HBASE-24585 > URL: https://issues.apache.org/jira/browse/HBASE-24585 > Project: HBase > Issue Type: Bug >Reporter: Michael Stack >Priority: Major > > (This description got redone after I figured out what was going on. > Previously it was just a litany of me banging around trying to learn > procedure-based WAL splitting and hbase.wal.split.to.hfile; no one needs to > read that; hence the refactor). > HBASE-24574 procedure-based distributed WAL splitting is enabled, and > split-to-hfile too. A forced crash requires recovery, with ServerCrashProcedure > splitting old WALs on restart. The recovery fails because we get stuck. The > Master can't assign meta because it is being recovered. The recovery can't > make progress because it is asking for a table descriptor for meta -- needed > by the hbase.wal.split.to.hfile feature -- and the master is not yet > initialized. After the default timeout, the Master shuts down because it can't > initialize. 
> {code} > 2020-06-18 19:53:54,175 ERROR [main] master.HMasterCommandLine: Master > exiting > java.lang.RuntimeException: Master not initialized after 20ms >at > org.apache.hadoop.hbase.util.JVMClusterUtil.waitForEvent(JVMClusterUtil.java:232) >at > org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:200) >at > org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:430) >at > org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:232) >at > org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140) >at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) >at > org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149) >at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3059) > {code} > The abort of Master interrupts other ongoing actions so later in the log > we'll see the WAL split show as interrupted > {code} > 2020-06-17 21:20:37,472 ERROR > [RS_LOG_REPLAY_OPS-regionserver/localhost:16020-0] > handler.RSProcedureHandler: Error when call RSProcedureCallable: > java.io.IOException: Failed WAL split, status=RESIGNED, > wal=file:/Users/stack/checkouts/hbase.apache.git/tmp/hbase/WALs/localhost,16020,1592440848604-splitting/localhost%2C16020%2C1592440848604.meta.1592440852959.meta >at > org.apache.hadoop.hbase.regionserver.SplitWALCallable.splitWal(SplitWALCallable.java:106) >at > org.apache.hadoop.hbase.regionserver.SplitWALCallable.call(SplitWALCallable.java:86) >at > org.apache.hadoop.hbase.regionserver.SplitWALCallable.call(SplitWALCallable.java:49) >at > org.apache.hadoop.hbase.regionserver.handler.RSProcedureHandler.process(RSProcedureHandler.java:49) >at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104) >at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) >at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) >at 
java.lang.Thread.run(Thread.java:748) > {code} > This issue becomes how to make hbase.wal.split.to.hfile work in standalone > mode.
[jira] [Resolved] (HBASE-23055) Alter hbase:meta
[ https://issues.apache.org/jira/browse/HBASE-23055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack resolved HBASE-23055. --- Resolution: Fixed Pushed addendum on branch-2.3+ > Alter hbase:meta > > > Key: HBASE-23055 > URL: https://issues.apache.org/jira/browse/HBASE-23055 > Project: HBase > Issue Type: Task > Components: meta >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > > hbase:meta is currently hardcoded. Its schema cannot be changed. > This issue is about allowing edits to the hbase:meta schema. It will allow us > to set encodings such as block-with-indexes, which will help > quell CPU usage on the host carrying hbase:meta. A dynamic hbase:meta is a first > step on the road to being able to split meta.
[GitHub] [hbase] saintstack merged pull request #1956: HBASE-23055 Alter hbase:meta
saintstack merged pull request #1956: URL: https://github.com/apache/hbase/pull/1956
[jira] [Commented] (HBASE-24595) hbase create namespace blocked when all datanodes has restarted
[ https://issues.apache.org/jira/browse/HBASE-24595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142587#comment-17142587 ] Duo Zhang commented on HBASE-24595: --- It's HBASE-22681 and HBASE-22684, but it seems we already have these patches in 2.1.6... > hbase create namespace blocked when all datanodes has restarted > --- > > Key: HBASE-24595 > URL: https://issues.apache.org/jira/browse/HBASE-24595 > Project: HBase > Issue Type: Bug >Affects Versions: 2.1.6 >Reporter: Yu Wang >Priority: Critical > Attachments: create_namespace_1.png, create_namespace_2.png, > hmaster.log, hmaster.png, hmaster_4569.jstack, hregionserver.log, > hregionserver_25649.jstack, procedure.png > > > environment: > jdk:1.8.0_181 > hadoop: 3.1.1 > hbase: 2.1.6 > hbase shell create namespace blocks when all datanodes have restarted > in a Kerberos environment, > but it succeeds without Kerberos > > The hmaster log shows: > 2020-06-19 23:47:48,241 WARN [PEWorker-15] > procedure.CreateNamespaceProcedure: Retriable error trying to create > namespace=abcd2 (in state=CREATE_NAMESPACE_INSERT_INTO_NS_TABLE) > java.net.SocketTimeoutException: callTimeout=120, callDuration=1220061: > Call to hadoop-hbnn0005.com/172.20.101.36:16020 failed on local exception: > org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=116, > waitTime=10763, rpcTimeout=10759 row 'abcd2' on table 'hbase:namespace' at > region=hbase:namespace,,1592548148073.f5c7e71fb5e5cab3b27e52600996f7fd., > hostname=hadoop-hbnn0005.com,16020,1592580274989, seqNum=162 > at > org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:159) > at org.apache.hadoop.hbase.client.HTable.put(HTable.java:542) > at > org.apache.hadoop.hbase.master.TableNamespaceManager.insertIntoNSTable(TableNamespaceManager.java:167) > at > org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.insertIntoNSTable(CreateNamespaceProcedure.java:240) > at > 
org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.executeFromState(CreateNamespaceProcedure.java:85) > at > org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.executeFromState(CreateNamespaceProcedure.java:39) > at > org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:189) > at > org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:965) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1723) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1462) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1200(ProcedureExecutor.java:78) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2039) > Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to > hadoop-hbnn0005.com/172.20.101.36:16020 failed on local exception: > org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=116, > waitTime=10763, rpcTimeout=10759 > at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:205) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:390) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406) > at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:96) > at > org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:199) > at > org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:682) > at > org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:757) > at > 
org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:485) > at java.lang.Thread.run(Thread.java:748) > Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=116, > waitTime=10763, rpcTimeout=10759 > at > org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:200) > ... 4 more > 2020-06-19 23:47:49,218 WARN [ProcExecTimeout] procedure2.ProcedureExecutor: > Worker stuck PEWorker-15(pid=171), run time 20mins, 1.262sec > 2020-06-19 23:47:54,220 WARN [ProcExecTimeout] procedure2.ProcedureExecutor: > Worker stuck PEWorker-15(pid=171), run time 20mins, 6.263sec > 2020-06-19 23:47:59,220 WARN [ProcExecTimeout] procedure2.ProcedureExecutor: > Worker stuck
[jira] [Commented] (HBASE-24595) hbase create namespace blocked when all datanodes has restarted
[ https://issues.apache.org/jira/browse/HBASE-24595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142563#comment-17142563 ] Yu Wang commented on HBASE-24595: - Thanks for your answer [~anoop.hbase]. I think this scenario is different from HBASE-21564. It seems this is related to Kerberos, since it works fine in a non-Kerberos environment, but I can't find any Kerberos info in the master and regionserver logs. I suspect the problem is in how the regionserver interacts with Kerberos. > hbase create namespace blocked when all datanodes has restarted > --- > > Key: HBASE-24595 > URL: https://issues.apache.org/jira/browse/HBASE-24595 > Project: HBase > Issue Type: Bug >Affects Versions: 2.1.6 >Reporter: Yu Wang >Priority: Critical > Attachments: create_namespace_1.png, create_namespace_2.png, > hmaster.log, hmaster.png, hmaster_4569.jstack, hregionserver.log, > hregionserver_25649.jstack, procedure.png > > > environment: > jdk:1.8.0_181 > hadoop: 3.1.1 > hbase: 2.1.6 > hbase shell create namespace blocks when all datanodes have restarted > in a Kerberos environment, > but it succeeds without Kerberos > > The hmaster log shows: > 2020-06-19 23:47:48,241 WARN [PEWorker-15] > procedure.CreateNamespaceProcedure: Retriable error trying to create > namespace=abcd2 (in state=CREATE_NAMESPACE_INSERT_INTO_NS_TABLE) > java.net.SocketTimeoutException: callTimeout=120, callDuration=1220061: > Call to hadoop-hbnn0005.com/172.20.101.36:16020 failed on local exception: > org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=116, > waitTime=10763, rpcTimeout=10759 row 'abcd2' on table 'hbase:namespace' at > region=hbase:namespace,,1592548148073.f5c7e71fb5e5cab3b27e52600996f7fd., > hostname=hadoop-hbnn0005.com,16020,1592580274989, seqNum=162 > at > org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:159) > at org.apache.hadoop.hbase.client.HTable.put(HTable.java:542) > at > 
org.apache.hadoop.hbase.master.TableNamespaceManager.insertIntoNSTable(TableNamespaceManager.java:167) > at > org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.insertIntoNSTable(CreateNamespaceProcedure.java:240) > at > org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.executeFromState(CreateNamespaceProcedure.java:85) > at > org.apache.hadoop.hbase.master.procedure.CreateNamespaceProcedure.executeFromState(CreateNamespaceProcedure.java:39) > at > org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:189) > at > org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:965) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1723) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1462) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1200(ProcedureExecutor.java:78) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2039) > Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call to > hadoop-hbnn0005.com/172.20.101.36:16020 failed on local exception: > org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=116, > waitTime=10763, rpcTimeout=10759 > at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:205) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:390) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406) > at org.apache.hadoop.hbase.ipc.Call.setTimeout(Call.java:96) > at > org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:199) > at > 
org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:682) > at > org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:757) > at > org.apache.hbase.thirdparty.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:485) > at java.lang.Thread.run(Thread.java:748) > Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=116, > waitTime=10763, rpcTimeout=10759 > at > org.apache.hadoop.hbase.ipc.RpcConnection$1.run(RpcConnection.java:200) > ... 4 more > 2020-06-19 23:47:49,218 WARN [ProcExecTimeout] procedure2.ProcedureExecutor: > Worker stuck PEWorker-15(pid=171), run time 20mins, 1.262sec > 2020-06-19
[jira] [Commented] (HBASE-24102) RegionMover should exclude draining/decommissioning nodes from target RSs
[ https://issues.apache.org/jira/browse/HBASE-24102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142568#comment-17142568 ] Hudson commented on HBASE-24102: Results for branch branch-2 [build #2715 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2715/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2715/General_20Nightly_20Build_20Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2715/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2715/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2715/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > RegionMover should exclude draining/decommissioning nodes from target RSs > - > > Key: HBASE-24102 > URL: https://issues.apache.org/jira/browse/HBASE-24102 > Project: HBase > Issue Type: Improvement >Reporter: Anoop Sam John >Assignee: Viraj Jasani >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0, 2.1.10, 2.2.5 > > > When using the RegionMover tool to unload the regions from a given RS, it decides > the list of destination RSs by > {code} > List<ServerName> regionServers = new ArrayList<>(); > regionServers.addAll(admin.getRegionServers()); > // Remove the host Region server from target Region Servers list > ServerName server = stripServer(regionServers, hostname, port); > . 
> // Remove RS present in the exclude file > stripExcludes(regionServers); > stripMaster(regionServers); > {code} > Yes, it is removing the RSs mentioned in the exclude file. > Better: when the RegionMover user is NOT mentioning any exclude list, we can > exclude the draining/decommissioning RSs returned by > Admin#listDecommissionedRegionServers().
[GitHub] [hbase] bsglz commented on pull request #1883: HBASE-24530 Introduce a split policy similar with SteppingSplitPolicy…
bsglz commented on pull request #1883: URL: https://github.com/apache/hbase/pull/1883#issuecomment-647871195 @wchevreuil @virajjasani Skimmed the comments of the thread; it seems option #3 is mostly supported. Should we continue the discussion or proceed with #3?
[GitHub] [hbase] bsglz commented on pull request #1926: HBASE-24586 Add table level locality in table.jsp
bsglz commented on pull request #1926: URL: https://github.com/apache/hbase/pull/1926#issuecomment-647869966 @wchevreuil @virajjasani Could you help review this one? Thanks.
[jira] [Resolved] (HBASE-24605) Break long region names in the web UI
[ https://issues.apache.org/jira/browse/HBASE-24605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Guangxu Cheng resolved HBASE-24605. --- Fix Version/s: 2.2.6 2.4.0 2.3.1 3.0.0-alpha-1 Resolution: Fixed Pushed to branch-2.2+. Thanks for your contribution, [~songxincun] > Break long region names in the web UI > - > > Key: HBASE-24605 > URL: https://issues.apache.org/jira/browse/HBASE-24605 > Project: HBase > Issue Type: Improvement > Components: UI >Affects Versions: 3.0.0-alpha-1 >Reporter: song XinCun >Assignee: song XinCun >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0, 2.2.6 > > Attachments: image-2020-06-21-20-18-37-041.png, > image-2020-06-21-20-19-25-183.png, image-2020-06-21-20-20-02-782.png, > image-2020-06-21-20-27-23-474.png, image-2020-06-21-20-28-36-464.png, > image-2020-06-21-20-29-07-819.png > > > Before this patch, when it comes to a long region name, the UI content will > run off the screen, making it unreadable. Like this: > !image-2020-06-21-20-18-37-041.png|width=542,height=50! > !image-2020-06-21-20-19-25-183.png|width=531,height=23! > !image-2020-06-21-20-20-02-782.png|width=542,height=146! > > After this patch, the long region name will be broken onto a new line, like > this: > !image-2020-06-21-20-27-23-474.png|width=529,height=35! > !image-2020-06-21-20-28-36-464.png|width=533,height=33! > !image-2020-06-21-20-29-07-819.png|width=531,height=117!
[jira] [Assigned] (HBASE-24615) MutableRangeHistogram#updateSnapshotRangeMetrics doesn't calculate the distribution for last bucket.
[ https://issues.apache.org/jira/browse/HBASE-24615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wenfeiyi666 reassigned HBASE-24615: --- Assignee: wenfeiyi666 > MutableRangeHistogram#updateSnapshotRangeMetrics doesn't calculate the > distribution for last bucket. > > > Key: HBASE-24615 > URL: https://issues.apache.org/jira/browse/HBASE-24615 > Project: HBase > Issue Type: Bug > Components: metrics >Affects Versions: 1.3.7 >Reporter: Rushabh Shah >Assignee: wenfeiyi666 >Priority: Major > > We are not processing the distribution for the last bucket. > https://github.com/apache/hbase/blob/master/hbase-hadoop-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableRangeHistogram.java#L70 > {code:java} > public void updateSnapshotRangeMetrics(MetricsRecordBuilder > metricsRecordBuilder, > Snapshot snapshot) { > long priorRange = 0; > long cumNum = 0; > final long[] ranges = getRanges(); > final String rangeType = getRangeType(); > for (int i = 0; i < ranges.length - 1; i++) { -> The bug lies > here. We are not processing last bucket. > long val = snapshot.getCountAtOrBelow(ranges[i]); > if (val - cumNum > 0) { > metricsRecordBuilder.addCounter( > Interns.info(name + "_" + rangeType + "_" + priorRange + "-" + > ranges[i], desc), > val - cumNum); > } > priorRange = ranges[i]; > cumNum = val; > } > long val = snapshot.getCount(); > if (val - cumNum > 0) { > metricsRecordBuilder.addCounter( > Interns.info(name + "_" + rangeType + "_" + ranges[ranges.length - > 1] + "-inf", desc), > val - cumNum); > } > } > {code}
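The bucketing arithmetic the report describes can be sketched in isolation. This is not the HBase class itself, just an illustrative model: `cumulative[i]` plays the role of `Snapshot#getCountAtOrBelow(ranges[i])`, `total` plays the role of `Snapshot#getCount()`, and the final array slot is the "last range to infinity" bucket that the report says is being skipped.

```java
public class RangeBuckets {
    // Per-bucket counts from cumulative counts, including the trailing
    // "-inf" bucket. Each bucket i holds cumulative[i] - cumulative[i-1];
    // the last bucket holds whatever remains up to the total.
    static long[] bucketCounts(long[] cumulative, long total) {
        long[] counts = new long[cumulative.length + 1];
        long prev = 0;
        for (int i = 0; i < cumulative.length; i++) {
            counts[i] = cumulative[i] - prev;
            prev = cumulative[i];
        }
        counts[cumulative.length] = total - prev; // the "-inf" bucket
        return counts;
    }

    public static void main(String[] args) {
        // cumulative counts at or below ranges {10, 100, 1000}; 7 values total
        long[] counts = bucketCounts(new long[] {3, 5, 6}, 7);
        System.out.println(java.util.Arrays.toString(counts)); // [3, 2, 1, 1]
    }
}
```

Dropping the final `total - prev` step is exactly the symptom described: any value above the largest configured range would never be reported.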
[GitHub] [hbase] guangxuCheng merged pull request #1942: HBASE-24605 Break long region names in the web UI
guangxuCheng merged pull request #1942: URL: https://github.com/apache/hbase/pull/1942
[jira] [Commented] (HBASE-22504) Optimize the MultiByteBuff#get(ByteBuffer, offset, len)
[ https://issues.apache.org/jira/browse/HBASE-22504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142525#comment-17142525 ] Zheng Hu commented on HBASE-22504: -- [~ndimiduk], Well, when I wrote this patch, I checked that no other class depends on findCommonPrefix; however, it is a public method, so removing it will indeed introduce compatibility issues. Let me restore it. Thanks. > Optimize the MultiByteBuff#get(ByteBuffer, offset, len) > --- > > Key: HBASE-22504 > URL: https://issues.apache.org/jira/browse/HBASE-22504 > Project: HBase > Issue Type: Sub-task > Components: BucketCache >Reporter: Zheng Hu >Assignee: Zheng Hu >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > Attachments: HBASE-22504.HBASE-21879.v01.patch > > > In HBASE-22483, we saw that the BucketCacheWriter thread was quite busy > [^BucketCacheWriter-is-busy.png], and the flame graph also indicated that > ByteBufferArray#internalTransfer cost ~6% CPU (see > [async-prof-pid-25042-cpu-1.svg|https://issues.apache.org/jira/secure/attachment/12970294/async-prof-pid-25042-cpu-1.svg]). > Because we used hbase.ipc.server.allocator.buffer.size=64KB, each > HFileBlock will be backed by a MultiByteBuff: one 64KB offheap ByteBuffer > and one small heap ByteBuffer. > The path depends on MultiByteBuff#get(ByteBuffer, offset, len) now: > {code:java} > RAMQueueEntry#writeToCache > |--> ByteBufferIOEngine#write > |--> ByteBufferArray#internalTransfer > |--> ByteBufferArray$WRITER > |--> MultiByteBuff#get(ByteBuffer, offset, len) > {code} > The MultiByteBuff#get impl is simple and crude now; we can optimize this > implementation: > {code:java} > @Override > public void get(ByteBuffer out, int sourceOffset, > int length) { > checkRefCount(); > // Not used from real read path actually.
So not going with > // optimization > for (int i = 0; i < length; ++i) { > out.put(this.get(sourceOffset + i)); > } > } > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
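For illustration, the byte-at-a-time loop quoted above could be replaced by chunk-wise bulk copies from each backing buffer. The following is a standalone sketch in plain NIO, not the actual HBase MultiByteBuff internals; the class and method names are illustrative only, and it assumes each backing buffer's limit equals its usable length:

```java
import java.nio.ByteBuffer;

public class MultiBufCopy {
  // Hypothetical sketch: copy `length` bytes starting at a logical
  // `sourceOffset` spanning several backing buffers, using one bulk put()
  // per contiguous run instead of one put() per byte.
  static void bulkGet(ByteBuffer[] items, ByteBuffer out, int sourceOffset, int length) {
    int itemIndex = 0;
    int offset = sourceOffset;
    // Locate the backing buffer that contains sourceOffset.
    while (offset >= items[itemIndex].limit()) {
      offset -= items[itemIndex].limit();
      itemIndex++;
    }
    while (length > 0) {
      ByteBuffer item = items[itemIndex];
      int toCopy = Math.min(length, item.limit() - offset);
      ByteBuffer dup = item.duplicate(); // don't disturb the source's position
      dup.position(offset);
      dup.limit(offset + toCopy);
      out.put(dup);                      // bulk copy of the contiguous run
      length -= toCopy;
      offset = 0;
      itemIndex++;
    }
  }
}
```

The win over the quoted loop is that each backing buffer is touched once with a single bulk transfer rather than once per byte.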
[jira] [Updated] (HBASE-24618) Backport HBASE-21204 to branch-1
[ https://issues.apache.org/jira/browse/HBASE-24618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Abhishek Singh Chouhan updated HBASE-24618: --- Fix Version/s: 1.7.0 > Backport HBASE-21204 to branch-1 > > > Key: HBASE-24618 > URL: https://issues.apache.org/jira/browse/HBASE-24618 > Project: HBase > Issue Type: Improvement >Reporter: Abhishek Singh Chouhan >Assignee: Abhishek Singh Chouhan >Priority: Major > Fix For: 1.7.0 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-24618) Backport HBASE-21204 to branch-1
Abhishek Singh Chouhan created HBASE-24618: -- Summary: Backport HBASE-21204 to branch-1 Key: HBASE-24618 URL: https://issues.apache.org/jira/browse/HBASE-24618 Project: HBase Issue Type: Improvement Reporter: Abhishek Singh Chouhan Assignee: Abhishek Singh Chouhan -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #1956: HBASE-23055 Alter hbase:meta
Apache-HBase commented on pull request #1956: URL: https://github.com/apache/hbase/pull/1956#issuecomment-647849808 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 33s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-2 Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 36s | branch-2 passed | | +1 :green_heart: | checkstyle | 0m 25s | branch-2 passed | | +1 :green_heart: | spotbugs | 0m 44s | branch-2 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 11s | the patch passed | | +1 :green_heart: | checkstyle | 0m 23s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 11m 23s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 0m 50s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 14s | The patch does not generate ASF License warnings. | | | | 28m 32s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1956/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1956 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux e75e5cfe9a89 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 4506f8d8ab | | Max. process+thread count | 94 (vs. 
ulimit of 12500) | | modules | C: hbase-common U: hbase-common | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1956/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1956: HBASE-23055 Alter hbase:meta
Apache-HBase commented on pull request #1956: URL: https://github.com/apache/hbase/pull/1956#issuecomment-647849971 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +1 :green_heart: | mvninstall | 5m 5s | branch-2 passed | | +1 :green_heart: | compile | 0m 28s | branch-2 passed | | +1 :green_heart: | shadedjars | 6m 40s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 21s | hbase-common in branch-2 failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 37s | the patch passed | | +1 :green_heart: | compile | 0m 26s | the patch passed | | +1 :green_heart: | javac | 0m 26s | the patch passed | | +1 :green_heart: | shadedjars | 6m 39s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 17s | hbase-common in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 47s | hbase-common in the patch passed. 
| | | | 29m 4s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1956/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1956 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 6dc7209c7d73 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 4506f8d8ab | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1956/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-common.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1956/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-common.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1956/1/testReport/ | | Max. process+thread count | 278 (vs. ulimit of 12500) | | modules | C: hbase-common U: hbase-common | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1956/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1956: HBASE-23055 Alter hbase:meta
Apache-HBase commented on pull request #1956: URL: https://github.com/apache/hbase/pull/1956#issuecomment-647848873 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 25s | branch-2 passed | | +1 :green_heart: | compile | 0m 24s | branch-2 passed | | +1 :green_heart: | shadedjars | 5m 34s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 22s | branch-2 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 48s | the patch passed | | +1 :green_heart: | compile | 0m 25s | the patch passed | | +1 :green_heart: | javac | 0m 25s | the patch passed | | +1 :green_heart: | shadedjars | 5m 36s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 20s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 27s | hbase-common in the patch passed. | | | | 25m 6s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1956/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1956 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 6a5264fcf2c8 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 4506f8d8ab | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1956/1/testReport/ | | Max. 
process+thread count | 257 (vs. ulimit of 12500) | | modules | C: hbase-common U: hbase-common | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1956/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1955: HBASE-24616 Remove BoundedRecoveredHFilesOutputSink dependency on a T…
Apache-HBase commented on pull request #1955: URL: https://github.com/apache/hbase/pull/1955#issuecomment-647845364 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 34s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 8s | branch-2 passed | | +1 :green_heart: | checkstyle | 1m 55s | branch-2 passed | | +1 :green_heart: | spotbugs | 3m 23s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 45s | the patch passed | | -0 :warning: | checkstyle | 1m 16s | hbase-server: The patch generated 1 new + 63 unchanged - 0 fixed = 64 total (was 63) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 13m 6s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 4m 30s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 26s | The patch does not generate ASF License warnings. 
| | | | 44m 18s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1955/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1955 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 4eb894d8e3d6 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 4506f8d8ab | | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1955/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 84 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1955/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (HBASE-24617) Enable injecting build start time value as part of build
Matthew Foley created HBASE-24617: - Summary: Enable injecting build start time value as part of build Key: HBASE-24617 URL: https://issues.apache.org/jira/browse/HBASE-24617 Project: HBase Issue Type: Improvement Components: create-release, UI Affects Versions: 3.0.0-alpha-1, 2.3.0 Reporter: Matthew Foley Assignee: Matthew Foley The HBase build's creation time is presented in the HBase UI, and made available through Java, via the {{org.apache.hadoop.hbase.Version}} class's {{date}} value, which is generated at build time by {{hbase-common/src/saveVersion.sh}}. The script just invokes the shell command {{date}} and captures its result as a string. The problem is, this occurs every time hbase-common is built. And, for good and sufficient reason, when making a release via dev-support/create-release, the task for building and deploying hbase jars as maven libraries and the task for building binary release artifacts as tarballs, EACH do a {{clean}} build. Thus, the build time found in the libs is different from the build time found in the release tarballs. There is value in keeping the two tasks independent, and able to run fully each by themselves. And there is value in doing a {{clean}} at the start of such processes, to make sure you're releasing binaries that exactly match the source code. So to keep these benefits, but enable the start time to be determined once and used for a couple builds in a row in a given environment, I propose to allow injecting the desired value. Specifically, I want to change saveVersion.sh to look for an existing value of env var HBASE_BUILD_TIME, and if it exists use it instead of calling {{date}}. One would of course set it as part of the build process (in create-release) and clear this value by unsetting the environment variable when done with the build. -- This message was sent by Atlassian Jira (v8.3.4#803005)
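The proposed fallback in saveVersion.sh could be sketched as follows; this is a hypothetical illustration of the described behavior, not the actual patch, and the function name is made up:

```shell
#!/bin/sh
# Hypothetical sketch of the HBASE-24617 proposal: prefer an injected
# HBASE_BUILD_TIME over calling `date`, so two consecutive clean builds
# (maven-library deploy and tarball build) can share one timestamp.
resolve_build_time() {
  if [ -n "${HBASE_BUILD_TIME}" ]; then
    # Value was injected by the release process (e.g. create-release).
    echo "${HBASE_BUILD_TIME}"
  else
    # Default: capture the current time, as saveVersion.sh does today.
    date
  fi
}

resolve_build_time
```

The release process would export HBASE_BUILD_TIME before the first build and unset it when done, exactly as the description proposes.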
[jira] [Commented] (HBASE-23055) Alter hbase:meta
[ https://issues.apache.org/jira/browse/HBASE-23055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142504#comment-17142504 ] Michael Stack commented on HBASE-23055: --- I put up an addendum > Alter hbase:meta > > > Key: HBASE-23055 > URL: https://issues.apache.org/jira/browse/HBASE-23055 > Project: HBase > Issue Type: Task > Components: meta >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > > hbase:meta is currently hardcoded. Its schema cannot be changed. > This issue is about allowing edits to the hbase:meta schema. It will allow us > to set encodings such as block-with-indexes, which will help > quell CPU usage on the host carrying hbase:meta. A dynamic hbase:meta is the first > step on the road to being able to split meta. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] saintstack opened a new pull request #1956: HBASE-23055 Alter hbase:meta
saintstack opened a new pull request #1956: URL: https://github.com/apache/hbase/pull/1956 Addendum to fix illegal removal of unused constant w/o a deprecation cycle. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] saintstack opened a new pull request #1955: HBASE-24616 Remove BoundedRecoveredHFilesOutputSink dependency on a T…
saintstack opened a new pull request #1955: URL: https://github.com/apache/hbase/pull/1955 …ableDescriptor Logging cleanup. hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedRecoveredHFilesOutputSink.java Undo fetching Table Descriptor. Not reliably available at recovery time. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase-native-client] phrocker commented on pull request #6: HBASE-23105: Download lib double conversion, fizz, update folly
phrocker commented on pull request #6: URL: https://github.com/apache/hbase-native-client/pull/6#issuecomment-647831826 @bharathv Thanks. Making some changes now to download double conversion and gsasl2. Will mark as ready for review after. Thanks! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-23739) BoundedRecoveredHFilesOutputSink should read the table descriptor directly
[ https://issues.apache.org/jira/browse/HBASE-23739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142494#comment-17142494 ] Michael Stack commented on HBASE-23739: --- bq. Sir, can you explain more about this? [~zghao] The idea was that even though the Master was not yet initialized -- i.e. was still starting up -- if a request came in for a table descriptor, we'd answer it if we could rather than return a PleaseHoldException. I tried it and it didn't fully work for the standalone scenario I am trying to solve for. So I went back to the idea of not asking for a TableDescriptor at all and just writing bare hfiles, which should work given they carry the metadata Readers need to figure out how the hfile was written. Let me put up a patch in a new JIRA, HBASE-24616 > BoundedRecoveredHFilesOutputSink should read the table descriptor directly > -- > > Key: HBASE-23739 > URL: https://issues.apache.org/jira/browse/HBASE-23739 > Project: HBase > Issue Type: Sub-task >Reporter: Guanghao Zhang >Assignee: Guanghao Zhang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > > Read from meta or filesystem? -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-24616) Remove BoundedRecoveredHFilesOutputSink dependency on a TableDescriptor
Michael Stack created HBASE-24616: - Summary: Remove BoundedRecoveredHFilesOutputSink dependency on a TableDescriptor Key: HBASE-24616 URL: https://issues.apache.org/jira/browse/HBASE-24616 Project: HBase Issue Type: Bug Components: HFile, MTTR Reporter: Michael Stack BoundedRecoveredHFilesOutputSink wants to read the TableDescriptor so it writes the particular hfile format specified by a table's schema. Getting the table schema can be tough at various points of operation, especially around startup. HBASE-23739 tried to read from the fs if unable to read the TableDescriptor from the Master. This approach works generally but fails in standalone mode, where we will have given up our startup attempt BEFORE the request to the Master for the TableDescriptor times out (the read from the fs is never attempted). The suggested patch here does away w/ reading the TableDescriptor and just has BoundedRecoveredHFilesOutputSink write generic hfiles. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #1933: HBASE-24588 : Submit task for NormalizationPlan
Apache-HBase commented on pull request #1933: URL: https://github.com/apache/hbase/pull/1933#issuecomment-647827260 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 28s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 34s | master passed | | +1 :green_heart: | compile | 1m 33s | master passed | | +1 :green_heart: | shadedjars | 6m 30s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 10s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 16s | the patch passed | | +1 :green_heart: | compile | 1m 32s | the patch passed | | +1 :green_heart: | javac | 1m 32s | the patch passed | | +1 :green_heart: | shadedjars | 6m 51s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 1m 8s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 18s | hbase-client in the patch passed. | | -1 :x: | unit | 228m 44s | hbase-server in the patch failed. 
| | | | 262m 3s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1933/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1933 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux b193ba3a6ce1 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 5a79a1c83c | | Default Java | 1.8.0_232 | | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1933/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1933/2/testReport/ | | Max. process+thread count | 2764 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1933/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] ndimiduk closed pull request #1832: HBASE-24493 WIP Increase hadoop logging
ndimiduk closed pull request #1832: URL: https://github.com/apache/hbase/pull/1832 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-15161) Umbrella: Miscellaneous improvements from production usage
[ https://issues.apache.org/jira/browse/HBASE-15161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-15161: - Affects Version/s: (was: 2.3.0) (was: 3.0.0-alpha-1) > Umbrella: Miscellaneous improvements from production usage > -- > > Key: HBASE-15161 > URL: https://issues.apache.org/jira/browse/HBASE-15161 > Project: HBase > Issue Type: Improvement >Reporter: Yu Li >Assignee: Yu Li >Priority: Major > > We use HBase to (mainly) build index for our search engine in Alibaba. > Recently we are upgrading our online cluster from 0.98.12 to 1.x and I'd like > to take the opportunity to contribute a bunch of our private patches to > community (better late than never, I hope :-)). This is an umbrella to track > this effort. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-15161) Umbrella: Miscellaneous improvements from production usage
[ https://issues.apache.org/jira/browse/HBASE-15161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk resolved HBASE-15161. -- Fix Version/s: 2.3.0 3.0.0-alpha-1 Resolution: Fixed > Umbrella: Miscellaneous improvements from production usage > -- > > Key: HBASE-15161 > URL: https://issues.apache.org/jira/browse/HBASE-15161 > Project: HBase > Issue Type: Improvement >Reporter: Yu Li >Assignee: Yu Li >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > > We use HBase to (mainly) build index for our search engine in Alibaba. > Recently we are upgrading our online cluster from 0.98.12 to 1.x and I'd like > to take the opportunity to contribute a bunch of our private patches to > community (better late than never, I hope :-)). This is an umbrella to track > this effort. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-15161) Umbrella: Miscellaneous improvements from production usage
[ https://issues.apache.org/jira/browse/HBASE-15161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-15161: - Release Note: This ticket summarizes significant improvements and expansion to the metrics surface area. Interested users should review the individual sub-tasks. Affects Version/s: 2.3.0 3.0.0-alpha-1 Adding fixVersion and a release note. Please adjust as required. > Umbrella: Miscellaneous improvements from production usage > -- > > Key: HBASE-15161 > URL: https://issues.apache.org/jira/browse/HBASE-15161 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0-alpha-1, 2.3.0 >Reporter: Yu Li >Assignee: Yu Li >Priority: Major > > We use HBase to (mainly) build index for our search engine in Alibaba. > Recently we are upgrading our online cluster from 0.98.12 to 1.x and I'd like > to take the opportunity to contribute a bunch of our private patches to > community (better late than never, I hope :-)). This is an umbrella to track > this effort. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-15595) Document new metrics improvements
[ https://issues.apache.org/jira/browse/HBASE-15595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142486#comment-17142486 ] Nick Dimiduk commented on HBASE-15595: -- So what do we want to mention exactly? Looking at [the book|https://hbase.apache.org/book.html#hbase_metrics], we don't attempt to make an exhaustive list. Rather, we highlight the "most important" metrics for master, region server, and for meta table. > Document new metrics improvements > - > > Key: HBASE-15595 > URL: https://issues.apache.org/jira/browse/HBASE-15595 > Project: HBase > Issue Type: Bug >Reporter: Enis Soztutar >Priority: Major > > We should document the improvements from the parent jira when we are done. > Per-table, per-user metrics, dfs metrics, etc. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] ndimiduk commented on a change in pull request #1922: HBASE-24583 Normalizer can't actually merge empty regions when neighbor is larger than average size
ndimiduk commented on a change in pull request #1922: URL: https://github.com/apache/hbase/pull/1922#discussion_r443876595 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.java ## @@ -369,7 +369,8 @@ private boolean skipForMerge(final RegionStates regionStates, final RegionInfo r } final long currentSizeMb = getRegionSizeMB(current); final long nextSizeMb = getRegionSizeMB(next); - if (currentSizeMb + nextSizeMb < avgRegionSizeMb) { + // always merge away empty regions when they present themselves. + if (currentSizeMb == 0 || nextSizeMb == 0 || currentSizeMb + nextSizeMb < avgRegionSizeMb) { Review comment: I like this fuzzy threshold idea. What if we merge a little more aggressively, expressed relative to `avgRegionSizeMb`? Something like ``` if (currentSizeMb + nextSizeMb < avgRegionSizeMb * 0.4) {...} ``` This gives us a strong preference toward larger regions, with a threshold based on the average size. I guess next you'll say "make it configurable" :) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
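The heuristic suggested in the review comment above can be sketched as a standalone predicate. The 0.4 factor is the reviewer's example value, not a committed default, and the class and method names here are illustrative, not HBase API:

```java
public class MergeHeuristic {
  // Reviewer's example factor; in the real patch this would presumably be
  // configurable, as the review comment anticipates.
  static final double MERGE_FACTOR = 0.4;

  // Merge two adjacent regions when their combined size falls under a
  // fraction of the average region size. An empty region (size 0) paired
  // with any region under the threshold is always a merge candidate.
  static boolean shouldMerge(long currentSizeMb, long nextSizeMb, long avgRegionSizeMb) {
    return currentSizeMb + nextSizeMb < avgRegionSizeMb * MERGE_FACTOR;
  }
}
```

Compared with the original `currentSizeMb + nextSizeMb < avgRegionSizeMb` check, this biases the normalizer toward fewer, larger regions while still sweeping up empty ones.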
[GitHub] [hbase] huaxiangsun commented on a change in pull request #1933: HBASE-24588 : Submit task for NormalizationPlan
huaxiangsun commented on a change in pull request #1933: URL: https://github.com/apache/hbase/pull/1933#discussion_r443875648 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java ## @@ -1957,17 +1959,27 @@ public boolean normalizeRegions() throws IOException { continue; } - // as of this writing, `plan.execute()` is non-blocking, so there's no artificial rate- - // limiting of merge requests due to this serial loop. + // as of this writing, `plan.submit()` is non-blocking and uses Async Admin APIs to + // submit task , so there's no artificial rate- + // limiting of merge/split requests due to this serial loop. for (NormalizationPlan plan : plans) { -plan.execute(admin); +Future future = plan.submit(admin); +submittedPlanList.add(future); if (plan.getType() == PlanType.SPLIT) { splitPlanCount++; } else if (plan.getType() == PlanType.MERGE) { mergePlanCount++; } } } +for (Future submittedPlan : submittedPlanList) { + try { +submittedPlan.get(); Review comment: As Nick commented, we need to think about the purpose of this change. I think the purpose is to know the result of the plans, which comes with a cost. 1). Timeout with Future.get(). When it times out, we do not know whether the plan succeeded or not. The only info it gives us is that the plan did not finish within a certain amount of time. 2). Performing all the get()s asynchronously. If get() blocks for whatever reason, there will be a huge number of threads blocked in get() -- a system resource leak. Maybe the best approach is, per Nick's comments, to log how many plans were submitted. If some plans fail, we can go to the procedure system for the root cause. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
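The ambiguity described in point 1) of the review above can be demonstrated with plain java.util.concurrent. This is a hypothetical helper, not HMaster code; a TimeoutException from Future.get only means the plan has not finished yet, never that it failed:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class PlanWait {
  // Wait for a submitted plan with a bounded timeout and classify the
  // outcome. Note the "UNKNOWN" case: a timeout gives no information about
  // whether the plan will eventually succeed or fail.
  static String waitForPlan(Future<?> plan, long timeoutMs) {
    try {
      plan.get(timeoutMs, TimeUnit.MILLISECONDS);
      return "COMPLETED";
    } catch (TimeoutException e) {
      return "UNKNOWN"; // still running; outcome undetermined
    } catch (ExecutionException e) {
      return "FAILED";  // the plan's task threw
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      return "INTERRUPTED";
    }
  }
}
```

Point 2) of the review follows from the same API: pushing each blocking get() onto its own thread just moves the problem, since a stuck get() then pins a thread indefinitely.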
[GitHub] [hbase] Apache-HBase commented on pull request #1924: HBASE-24552 Replica region needs to check if primary region directory…
Apache-HBase commented on pull request #1924: URL: https://github.com/apache/hbase/pull/1924#issuecomment-647813718 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 39s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 50s | master passed | | +1 :green_heart: | compile | 1m 12s | master passed | | +1 :green_heart: | shadedjars | 7m 56s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 44s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 42s | the patch passed | | +1 :green_heart: | compile | 1m 11s | the patch passed | | +1 :green_heart: | javac | 1m 11s | the patch passed | | +1 :green_heart: | shadedjars | 7m 24s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 42s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 143m 25s | hbase-server in the patch passed. | | | | 175m 4s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1924/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1924 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux d7cdf0951aad 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 5a79a1c83c | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1924/2/testReport/ | | Max. 
process+thread count | 3884 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1924/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] ndimiduk commented on a change in pull request #1922: HBASE-24583 Normalizer can't actually merge empty regions when neighbor is larger than average size
ndimiduk commented on a change in pull request #1922: URL: https://github.com/apache/hbase/pull/1922#discussion_r443873027

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/MergeNormalizationPlan.java
## @@ -78,4 +82,30 @@ public void execute(Admin admin) {
       LOG.error("Error during region merge: ", ex);
     }
   }
+
+  @Override
+  public boolean equals(Object o) {
+    if (this == o) {
+      return true;
+    }
+
+    if (o == null || getClass() != o.getClass()) {
+      return false;
+    }
+
+    MergeNormalizationPlan that = (MergeNormalizationPlan) o;
+
+    return new EqualsBuilder()
+      .append(firstRegion, that.firstRegion)
+      .append(secondRegion, that.secondRegion)
+      .isEquals();
+  }
+
+  @Override
+  public int hashCode() {

Review comment: Yeah, seems they're unused. As is `SplitNormalizationPlan#getRegionInfo()`. It seems a little strange to have a POJO whose members show up in `toString` without any public accessors, but yeah, less data visibility is better data visibility.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] ndimiduk commented on a change in pull request #1922: HBASE-24583 Normalizer can't actually merge empty regions when neighbor is larger than average size
ndimiduk commented on a change in pull request #1922: URL: https://github.com/apache/hbase/pull/1922#discussion_r443871798

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/MergeNormalizationPlan.java
## @@ -78,4 +82,30 @@ public void execute(Admin admin) {
       LOG.error("Error during region merge: ", ex);
     }
   }
+
+  @Override
+  public boolean equals(Object o) {
+    if (this == o) {
+      return true;
+    }
+
+    if (o == null || getClass() != o.getClass()) {
+      return false;
+    }
+
+    MergeNormalizationPlan that = (MergeNormalizationPlan) o;
+
+    return new EqualsBuilder()
+      .append(firstRegion, that.firstRegion)
+      .append(secondRegion, that.secondRegion)
+      .isEquals();
+  }
+
+  @Override
+  public int hashCode() {
+    return new HashCodeBuilder(17, 37)

Review comment: And make the object instances that much larger? Is that really helpful? I think of this as a premature optimization, something the JIT can handle for me if it thinks so. If you feel strongly about it, I suppose...

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
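For readers following the review, the `EqualsBuilder`/`HashCodeBuilder` pair in the diff implements the standard equals/hashCode contract over the two region fields. A minimal dependency-free sketch of the same contract, using `java.util.Objects` instead of commons-lang3 and a hypothetical `MergePlan` class standing in for `MergeNormalizationPlan`:

```java
import java.util.Objects;

// Hypothetical stand-in for MergeNormalizationPlan: same equals/hashCode
// contract as the EqualsBuilder/HashCodeBuilder version in the diff, but
// using only java.util.Objects so the sketch has no extra dependencies.
class MergePlan {
    private final String firstRegion;
    private final String secondRegion;

    MergePlan(String firstRegion, String secondRegion) {
        this.firstRegion = firstRegion;
        this.secondRegion = secondRegion;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        MergePlan that = (MergePlan) o;
        // Field-by-field comparison, as EqualsBuilder#append does.
        return Objects.equals(firstRegion, that.firstRegion)
            && Objects.equals(secondRegion, that.secondRegion);
    }

    @Override
    public int hashCode() {
        // Combines both fields, as HashCodeBuilder(17, 37) does; recomputed
        // on each call rather than cached, matching the review's preference.
        return Objects.hash(firstRegion, secondRegion);
    }
}
```

Two plans over the same region pair compare equal and hash identically, which is what lets normalization plans be deduplicated in sets or compared in tests.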
[jira] [Commented] (HBASE-24102) RegionMover should exclude draining/decommissioning nodes from target RSs
[ https://issues.apache.org/jira/browse/HBASE-24102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142472#comment-17142472 ] Hudson commented on HBASE-24102: Results for branch branch-2.2 [build #900 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/900/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/900//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/900//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/900//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color}
> RegionMover should exclude draining/decommissioning nodes from target RSs
> -
>
> Key: HBASE-24102
> URL: https://issues.apache.org/jira/browse/HBASE-24102
> Project: HBase
> Issue Type: Improvement
> Reporter: Anoop Sam John
> Assignee: Viraj Jasani
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0, 2.1.10, 2.2.5
>
> When using RegionMover tool to unload the regions from a given RS, it decides
> the list of destination RSs by
> {code}
> List<ServerName> regionServers = new ArrayList<>();
> regionServers.addAll(admin.getRegionServers());
> // Remove the host Region server from target Region Servers list
> ServerName server = stripServer(regionServers, hostname, port);
> .
> // Remove RS present in the exclude file
> stripExcludes(regionServers);
> stripMaster(regionServers);
> {code}
> Ya it is removing the RSs mentioned in the exclude file.
> Better: when the RegionMover user is NOT mentioning any exclude list, we can
> exclude the draining/decommissioning RSs via
> Admin#listDecommissionedRegionServers().

-- This message was sent by Atlassian Jira (v8.3.4#803005)
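The improvement described above can be sketched with plain collections. Here `targetServers` is a hypothetical helper, not HBase API: plain strings stand in for `ServerName`, and the two input lists stand in for the results of `Admin#getRegionServers()` and `Admin#listDecommissionedRegionServers()`.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed default behavior: when no exclude file is given,
// also drop draining/decommissioning servers from the unload targets.
class RegionMoverTargets {
    static List<String> targetServers(List<String> liveServers,
                                      List<String> decommissioned,
                                      String hostBeingUnloaded) {
        List<String> targets = new ArrayList<>(liveServers);
        // Remove the host Region server from target Region Servers list
        targets.remove(hostBeingUnloaded);
        // Proposed additional filter: never target decommissioned/draining RSs
        targets.removeAll(decommissioned);
        return targets;
    }
}
```

With live servers rs1..rs3, rs3 decommissioned, and rs1 being unloaded, only rs2 remains a valid move target.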
[jira] [Commented] (HBASE-24205) Create metric to know the number of reads that happens from memstore
[ https://issues.apache.org/jira/browse/HBASE-24205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142471#comment-17142471 ] Hudson commented on HBASE-24205: Results for branch branch-2.2 [build #900 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/900/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/900//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/900//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/900//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Create metric to know the number of reads that happens from memstore > > > Key: HBASE-24205 > URL: https://issues.apache.org/jira/browse/HBASE-24205 > Project: HBase > Issue Type: Improvement > Components: metrics >Affects Versions: 3.0.0-alpha-1 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0, 2.2.6 > > Attachments: screenshot.png, screenshot_tablevscf.png > > > A metric to identify number of reads that were served from memstore (atleast > if the gets can be accounted for) then it gives a value addition to know if > among the reads how much was targeted at the most recent data. > Currently the existing metric framework at region level should be enough but > we can also add a metric per store level. That will be more granular. > We can also expose this via HbTop. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24603) Zookeeper sync() call is async
[ https://issues.apache.org/jira/browse/HBASE-24603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142469#comment-17142469 ] Bharath Vissapragada commented on HBASE-24603: -- Some tests have regressed with the sync() call never returning, which is weird since there shouldn't be many transactions in the test context for the followers to catch up. I'm trying to dig into the ZK code to understand it better and see if it's a ZK bug.
> Zookeeper sync() call is async
> --
>
> Key: HBASE-24603
> URL: https://issues.apache.org/jira/browse/HBASE-24603
> Project: HBase
> Issue Type: Improvement
> Components: master, regionserver
> Affects Versions: 3.0.0-alpha-1, 2.3.0, 1.7.0
> Reporter: Bharath Vissapragada
> Assignee: Bharath Vissapragada
> Priority: Critical
> Fix For: 3.0.0-alpha-1, 2.3.1, 1.7.0, 2.4.0
>
> Here is the method that does a sync() of lagging followers with leader in the
> quorum. We rely on this to see a consistent snapshot of ZK data from multiple
> clients. However the problem is that the underlying sync() call is actually
> asynchronous since we are passing a 'null' callback. See the ZK API
> [doc|https://zookeeper.apache.org/doc/r3.5.7/apidocs/zookeeper-server/index.html]
> for details. The end-result is that sync() doesn't guarantee that it has
> happened by the time it returns.
> {noformat}
> /**
>  * Forces a synchronization of this ZooKeeper client connection.
>  *
>  * Executing this method before running other methods will ensure that the
>  * subsequent operations are up-to-date and consistent as of the time that
>  * the sync is complete.
>  *
>  * This is used for compareAndSwap type operations where we need to read the
>  * data of an existing node and delete or transition that node, utilizing the
>  * previously read version and data. We want to ensure that the version read
>  * is up-to-date from when we begin the operation.
>  */
> public void sync(String path) throws KeeperException {
>   this.recoverableZooKeeper.sync(path, null, null);
> }
> {noformat}
> We rely on this heavily (at least in the older branches that do ZK based
> region assignment). In branch-1 we saw weird "BadVersionException" exceptions
> in RITs because of the inconsistent view of the ZK snapshot. It could
> manifest differently in other branches. Either way, this is something we need
> to fix.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
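The usual remedy for an async sync() is to pass a real callback and block on a latch until it fires. Below is a ZK-free sketch of that pattern; `asyncSync` is a stub standing in for the asynchronous `ZooKeeper#sync(path, cb, ctx)` call, and a real fix would also inspect the result code the callback delivers.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// ZooKeeper#sync(path, cb, ctx) only queues the sync; a null callback (as in
// the quoted HBase code) means nothing ever waits for it to complete.
// Blocking on a latch released by the callback makes the call synchronous.
class SyncSketch {
    interface VoidCallback {
        void processResult(int rc, String path, Object ctx);
    }

    // Stand-in for the asynchronous ZooKeeper client call: completes the
    // callback on another thread, as the real client does.
    static void asyncSync(String path, VoidCallback cb, Object ctx) {
        new Thread(() -> cb.processResult(0 /* OK */, path, ctx)).start();
    }

    // Blocks until the sync callback fires, or the timeout elapses.
    static boolean syncBlocking(String path, long timeoutMs) {
        CountDownLatch latch = new CountDownLatch(1);
        asyncSync(path, (rc, p, ctx) -> latch.countDown(), null);
        try {
            // Bounded wait guards against a server that never answers, which
            // is exactly the hang the comment above describes in tests.
            return latch.await(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```

The bounded `await` also gives callers a way to surface the "sync never returns" regression as a timeout instead of an indefinite hang.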
[GitHub] [hbase] Apache-HBase commented on pull request #1952: HBASE-24612: Consider allowing a separate EventLoopGroup for acceptin…
Apache-HBase commented on pull request #1952: URL: https://github.com/apache/hbase/pull/1952#issuecomment-647807928 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 12s | Docker mode activated. | | -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 48s | branch-2 passed | | +1 :green_heart: | compile | 1m 16s | branch-2 passed | | +1 :green_heart: | shadedjars | 6m 39s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 43s | hbase-server in branch-2 failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 22s | the patch passed | | +1 :green_heart: | compile | 1m 12s | the patch passed | | +1 :green_heart: | javac | 1m 12s | the patch passed | | +1 :green_heart: | shadedjars | 6m 25s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 41s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 191m 48s | hbase-server in the patch passed. 
| | | | 221m 10s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1952 | | JIRA Issue | HBASE-24612 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 9a541546cdf4 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 4506f8d8ab | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/2/testReport/ | | Max. process+thread count | 2557 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24611) Bring back old constructor of SnapshotDescription
[ https://issues.apache.org/jira/browse/HBASE-24611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142468#comment-17142468 ] Hudson commented on HBASE-24611: Results for branch branch-2 [build #2714 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2714/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2714/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2714/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2714/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2714/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Bring back old constructor of SnapshotDescription > - > > Key: HBASE-24611 > URL: https://issues.apache.org/jira/browse/HBASE-24611 > Project: HBase > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > > As part of HBASE-22648 (Snapshot TTL), one of SnapshotDescription constructor > was modified with an additional argument and hence, this is raising source > compatibility concerns for minor releases. We need to bring back old > constructor, mark it deprecated and internally point to new constructor with > null/empty snapshotProps. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #1924: HBASE-24552 Replica region needs to check if primary region directory…
Apache-HBase commented on pull request #1924: URL: https://github.com/apache/hbase/pull/1924#issuecomment-647806839 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 42s | master passed | | +1 :green_heart: | compile | 1m 12s | master passed | | +1 :green_heart: | shadedjars | 5m 47s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 43s | hbase-server in master failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 5s | the patch passed | | +1 :green_heart: | compile | 1m 8s | the patch passed | | +1 :green_heart: | javac | 1m 8s | the patch passed | | +1 :green_heart: | shadedjars | 5m 47s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 41s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 125m 52s | hbase-server in the patch passed. 
| | | | 152m 37s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1924/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1924 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux aa03a70738c6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 5a79a1c83c | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1924/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1924/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1924/2/testReport/ | | Max. process+thread count | 4536 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1924/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24611) Bring back old constructor of SnapshotDescription
[ https://issues.apache.org/jira/browse/HBASE-24611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142465#comment-17142465 ] Hudson commented on HBASE-24611: Results for branch branch-2.3 [build #149 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/149/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/149/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/149/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/149/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/149/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Bring back old constructor of SnapshotDescription > - > > Key: HBASE-24611 > URL: https://issues.apache.org/jira/browse/HBASE-24611 > Project: HBase > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0 > > > As part of HBASE-22648 (Snapshot TTL), one of SnapshotDescription constructor > was modified with an additional argument and hence, this is raising source > compatibility concerns for minor releases. We need to bring back old > constructor, mark it deprecated and internally point to new constructor with > null/empty snapshotProps. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #1954: HBASE-24231 Add hadoop 3.2.x in our support matrix
Apache-HBase commented on pull request #1954: URL: https://github.com/apache/hbase/pull/1954#issuecomment-647804434 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 33s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 54s | master passed | | +0 :ok: | refguide | 6m 21s | branch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 56s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +0 :ok: | refguide | 6m 44s | patch has no errors when building the reference guide. See footer for rendered docs, which you should manually inspect. | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 16s | The patch does not generate ASF License warnings. | | | | 24m 20s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1954/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1954 | | Optional Tests | dupname asflicense refguide | | uname | Linux 47a8db67c6e4 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 5a79a1c83c | | refguide | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1954/1/artifact/yetus-general-check/output/branch-site/book.html | | refguide | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1954/1/artifact/yetus-general-check/output/patch-site/book.html | | Max. 
process+thread count | 65 (vs. ulimit of 12500) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1954/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24446) Use EnvironmentEdgeManager to compute clock skew in Master
[ https://issues.apache.org/jira/browse/HBASE-24446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142462#comment-17142462 ] Nick Dimiduk commented on HBASE-24446: -- Sorry for the delay [~vjasani]. I reviewed https://github.com/apache/hbase/pull/1885, +1 for this on branch-2.3. I think it's safe/reasonable for backport to all active branch-2 lines.
> Use EnvironmentEdgeManager to compute clock skew in Master
> --
>
> Key: HBASE-24446
> URL: https://issues.apache.org/jira/browse/HBASE-24446
> Project: HBase
> Issue Type: Bug
> Affects Versions: 1.6.0
> Reporter: Sandeep Guggilam
> Assignee: Sandeep Guggilam
> Priority: Minor
> Fix For: 3.0.0-alpha-1, 1.7.0, 2.4.0
>
> There are a few cases where the Master is not able to complete the
> initialization as it is waiting for the region server to report to it. The
> region server actually reported to the master but the master rejected the
> request because of a clock skew issue though both of them are on the same JVM.
> The Region server uses EnvironmentEdgeManager.currentTime to report the
> current time and HMaster uses System.currentTimeMillis() to get the current
> time for computation against the reported time by RS. We should also just
> use EnvironmentEdgeManager even in Master as we are expected not to use
> System.currentTime directly and instead go through EnvironmentEdgeManager.

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #1933: HBASE-24588 : Submit task for NormalizationPlan
Apache-HBase commented on pull request #1933: URL: https://github.com/apache/hbase/pull/1933#issuecomment-647803397 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 24s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 4m 26s | master passed | | +1 :green_heart: | compile | 1m 37s | master passed | | +1 :green_heart: | shadedjars | 6m 15s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 33s | hbase-client in master failed. | | -0 :warning: | javadoc | 0m 46s | hbase-server in master failed. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 5m 3s | the patch passed | | +1 :green_heart: | compile | 1m 55s | the patch passed | | +1 :green_heart: | javac | 1m 55s | the patch passed | | +1 :green_heart: | shadedjars | 7m 10s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 26s | hbase-client in the patch failed. | | -0 :warning: | javadoc | 0m 44s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 16s | hbase-client in the patch passed. | | -1 :x: | unit | 147m 30s | hbase-server in the patch failed. 
| | | | 181m 2s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1933/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1933 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 454b58583597 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 5a79a1c83c | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1933/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1933/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1933/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1933/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1933/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1933/2/testReport/ | | Max. process+thread count | 4118 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1933/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (HBASE-24615) MutableRangeHistogram#updateSnapshotRangeMetrics doesn't calculate the distribution for last bucket.
Rushabh Shah created HBASE-24615: Summary: MutableRangeHistogram#updateSnapshotRangeMetrics doesn't calculate the distribution for last bucket. Key: HBASE-24615 URL: https://issues.apache.org/jira/browse/HBASE-24615 Project: HBase Issue Type: Bug Components: metrics Affects Versions: 1.3.7 Reporter: Rushabh Shah

We are not processing the distribution for the last bucket. https://github.com/apache/hbase/blob/master/hbase-hadoop-compat/src/main/java/org/apache/hadoop/metrics2/lib/MutableRangeHistogram.java#L70

{code:java}
public void updateSnapshotRangeMetrics(MetricsRecordBuilder metricsRecordBuilder,
    Snapshot snapshot) {
  long priorRange = 0;
  long cumNum = 0;
  final long[] ranges = getRanges();
  final String rangeType = getRangeType();
  for (int i = 0; i < ranges.length - 1; i++) { // -> The bug lies here. We are not processing the last bucket.
    long val = snapshot.getCountAtOrBelow(ranges[i]);
    if (val - cumNum > 0) {
      metricsRecordBuilder.addCounter(
        Interns.info(name + "_" + rangeType + "_" + priorRange + "-" + ranges[i], desc),
        val - cumNum);
    }
    priorRange = ranges[i];
    cumNum = val;
  }
  long val = snapshot.getCount();
  if (val - cumNum > 0) {
    metricsRecordBuilder.addCounter(
      Interns.info(name + "_" + rangeType + "_" + ranges[ranges.length - 1] + "-inf", desc),
      val - cumNum);
  }
}
{code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
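Concretely, because the quoted loop stops at `ranges.length - 1`, counts landing in the last configured bucket get folded into the trailing "-inf" overflow bucket. A self-contained sketch of the corrected accounting follows; `HistogramBuckets` is a hypothetical helper class, with `countAtOrBelow` mimicking `Snapshot#getCountAtOrBelow` over a plain sample array.

```java
import java.util.Arrays;

// Sketch of the fix: iterate over *all* configured boundaries so the
// (ranges[n-2], ranges[n-1]] bucket is emitted on its own instead of being
// folded into the "-inf" overflow bucket.
class HistogramBuckets {
    // Stand-in for Snapshot#getCountAtOrBelow: samples <= bound.
    static long countAtOrBelow(long[] values, long bound) {
        return Arrays.stream(values).filter(v -> v <= bound).count();
    }

    // One count per (prev, ranges[i]] bucket, plus a final overflow bucket.
    static long[] bucketCounts(long[] values, long[] ranges) {
        long[] out = new long[ranges.length + 1];
        long cumNum = 0;
        for (int i = 0; i < ranges.length; i++) { // note: no "- 1" here
            long val = countAtOrBelow(values, ranges[i]);
            out[i] = val - cumNum;
            cumNum = val;
        }
        out[ranges.length] = values.length - cumNum; // the "-inf" bucket
        return out;
    }
}
```

With samples {1, 5, 50, 500} and boundaries {10, 100}, this yields bucket counts 2, 1, 1; the quoted loop would instead report 2 for the 0-10 bucket and lump the remaining 2 samples into "100-inf".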
[jira] [Updated] (HBASE-24567) Create release should url-encode all characters when building git uri
[ https://issues.apache.org/jira/browse/HBASE-24567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-24567: - Fix Version/s: 3.0.0-alpha-1 Resolution: Fixed Status: Resolved (was: Patch Available) > Create release should url-encode all characters when building git uri > - > > Key: HBASE-24567 > URL: https://issues.apache.org/jira/browse/HBASE-24567 > Project: HBase > Issue Type: Task > Components: community >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 3.0.0-alpha-1 > > > The release tool doesn't url encode all characters provided for > {{ASF_USERNAME, ASF_PASSWORD}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] ndimiduk merged pull request #1907: HBASE-24567 Create release should url-encode all characters when building git uri
ndimiduk merged pull request #1907: URL: https://github.com/apache/hbase/pull/1907 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase-operator-tools] ndimiduk commented on a change in pull request #69: HBASE-24587 hbck2 command should accept one or more files containing …
ndimiduk commented on a change in pull request #69: URL: https://github.com/apache/hbase-operator-tools/pull/69#discussion_r443855362

## File path: hbase-hbck2/README.md
## @@ -137,12 +138,13 @@
 Command: Returns the pid(s) of the created AssignProcedure(s) or -1 if none.
 If -i or --inputFiles is specified, pass one or more input file names. Each
 file contains encoded region names, one per line. For example:
-  $ HBCK2 assigns -i fileName1 fileName2
+  $ HBCK2 -i assigns fileName1 fileName2

Review comment: The command comes after the `-i`? I think this was correct before the change.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1954: HBASE-24231 Add hadoop 3.2.x in our support matrix
Apache-HBase commented on pull request #1954: URL: https://github.com/apache/hbase/pull/1954#issuecomment-647796904 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 29s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | ||| _ Patch Compile Tests _ | ||| _ Other Tests _ | | | | 1m 36s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1954/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1954 | | Optional Tests | | | uname | Linux c9db2eebb45a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 5a79a1c83c | | Max. process+thread count | 52 (vs. ulimit of 12500) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1954/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1954: HBASE-24231 Add hadoop 3.2.x in our support matrix
Apache-HBase commented on pull request #1954: URL: https://github.com/apache/hbase/pull/1954#issuecomment-647797030 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 36s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | ||| _ Patch Compile Tests _ | ||| _ Other Tests _ | | | | 1m 46s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1954/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1954 | | Optional Tests | | | uname | Linux 12e8235ffa37 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 5a79a1c83c | | Max. process+thread count | 46 (vs. ulimit of 12500) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1954/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Reopened] (HBASE-24144) Update docs from master
[ https://issues.apache.org/jira/browse/HBASE-24144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk reopened HBASE-24144: -- Reopening so as to include HBASE-24231. > Update docs from master > --- > > Key: HBASE-24144 > URL: https://issues.apache.org/jira/browse/HBASE-24144 > Project: HBase > Issue Type: Sub-task > Components: documentation >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 2.3.0 > > > Take a pass updating the docs. Have a look at what's on branch-2.2 and add > whatever updates we need from master. Consider refreshing branch-2 as well, > since it's been a while. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] ndimiduk opened a new pull request #1954: HBASE-24231 Add hadoop 3.2.x in our support matrix
ndimiduk opened a new pull request #1954: URL: https://github.com/apache/hbase/pull/1954 Add a line for hadoop-3.2.x. Values are based on the if-statement in our personality file,
```
if [[ "${PATCH_BRANCH}" = branch-1* ]]; then
  yetus_info "Setting Hadoop 3 versions to test based on branch-1.x rules."
  hbase_hadoop3_versions=""
elif [[ "${PATCH_BRANCH}" = branch-2.0 ]] || [[ "${PATCH_BRANCH}" = branch-2.1 ]]; then
  yetus_info "Setting Hadoop 3 versions to test based on branch-2.0/branch-2.1 rules"
  if [[ "${QUICK_HADOOPCHECK}" == "true" ]]; then
    hbase_hadoop3_versions="3.0.3 3.1.2"
  else
    hbase_hadoop3_versions="3.0.3 3.1.1 3.1.2"
  fi
else
  yetus_info "Setting Hadoop 3 versions to test based on branch-2.2+/master/feature branch rules"
  if [[ "${QUICK_HADOOPCHECK}" == "true" ]]; then
    hbase_hadoop3_versions="3.1.2 3.2.1"
  else
    hbase_hadoop3_versions="3.1.1 3.1.2 3.2.0 3.2.1"
  fi
fi
```
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Work started] (HBASE-24231) Add hadoop 3.2.x in our support matrix
[ https://issues.apache.org/jira/browse/HBASE-24231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-24231 started by Nick Dimiduk. > Add hadoop 3.2.x in our support matrix > -- > > Key: HBASE-24231 > URL: https://issues.apache.org/jira/browse/HBASE-24231 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Duo Zhang >Assignee: Nick Dimiduk >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-24231) Add hadoop 3.2.x in our support matrix
[ https://issues.apache.org/jira/browse/HBASE-24231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk reassigned HBASE-24231: Assignee: Nick Dimiduk > Add hadoop 3.2.x in our support matrix > -- > > Key: HBASE-24231 > URL: https://issues.apache.org/jira/browse/HBASE-24231 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Duo Zhang >Assignee: Nick Dimiduk >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #1952: HBASE-24612: Consider allowing a separate EventLoopGroup for acceptin…
Apache-HBase commented on pull request #1952: URL: https://github.com/apache/hbase/pull/1952#issuecomment-647791783 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 46s | Docker mode activated. | | -0 :warning: | yetus | 0m 5s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 15s | branch-2 passed | | +1 :green_heart: | compile | 1m 7s | branch-2 passed | | +1 :green_heart: | shadedjars | 6m 4s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 43s | branch-2 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 45s | the patch passed | | +1 :green_heart: | compile | 1m 1s | the patch passed | | +1 :green_heart: | javac | 1m 1s | the patch passed | | +1 :green_heart: | shadedjars | 5m 34s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 37s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 146m 27s | hbase-server in the patch failed. 
| | | | 172m 31s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1952 | | JIRA Issue | HBASE-24612 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux e0c0aeb72e31 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 4506f8d8ab | | Default Java | 1.8.0_232 | | unit | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/2/artifact/yetus-jdk8-hadoop2-check/output/patch-unit-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/2/testReport/ | | Max. process+thread count | 4356 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-24581) Skip compaction request/check for replica regions at the early stage.
[ https://issues.apache.org/jira/browse/HBASE-24581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huaxiang Sun updated HBASE-24581: - Summary: Skip compaction request/check for replica regions at the early stage. (was: Skip compaction request/check for replica regions at the ) > Skip compaction request/check for replica regions at the early stage. > - > > Key: HBASE-24581 > URL: https://issues.apache.org/jira/browse/HBASE-24581 > Project: HBase > Issue Type: Improvement > Components: read replicas >Affects Versions: 2.3.0 >Reporter: Huaxiang Sun >Assignee: Huaxiang Sun >Priority: Major > > I found that in certain cases replica regions can trigger compaction, one > example as follows, need to check all places to avoid compaction for replica > regions. > {code:java} > @Override > public void postOpenDeployTasks(final PostOpenDeployContext context) throws > IOException { > HRegion r = context.getRegion(); > long openProcId = context.getOpenProcId(); > long masterSystemTime = context.getMasterSystemTime(); > rpcServices.checkOpen(); > LOG.info("Post open deploy tasks for {}, openProcId={}, > masterSystemTime={}", > r.getRegionInfo().getRegionNameAsString(), openProcId, masterSystemTime); > // Do checks to see if we need to compact (references or too many files) > // TODO: SHX, do not do this for replica regions? Otherwise, it is going to > lost data locality for primary regions. > for (HStore s : r.stores.values()) { > if (s.hasReferences() || s.needsCompaction()) { > this.compactSplitThread.requestSystemCompaction(r, s, "Opening Region"); > } > } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24581) Skip compaction request/check for replica regions at the
[ https://issues.apache.org/jira/browse/HBASE-24581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huaxiang Sun updated HBASE-24581: - Summary: Skip compaction request/check for replica regions at the (was: Replica regions should not trigger any compaction) > Skip compaction request/check for replica regions at the > - > > Key: HBASE-24581 > URL: https://issues.apache.org/jira/browse/HBASE-24581 > Project: HBase > Issue Type: Improvement > Components: read replicas >Affects Versions: 2.3.0 >Reporter: Huaxiang Sun >Assignee: Huaxiang Sun >Priority: Major > > I found that in certain cases replica regions can trigger compaction, one > example as follows, need to check all places to avoid compaction for replica > regions. > {code:java} > @Override > public void postOpenDeployTasks(final PostOpenDeployContext context) throws > IOException { > HRegion r = context.getRegion(); > long openProcId = context.getOpenProcId(); > long masterSystemTime = context.getMasterSystemTime(); > rpcServices.checkOpen(); > LOG.info("Post open deploy tasks for {}, openProcId={}, > masterSystemTime={}", > r.getRegionInfo().getRegionNameAsString(), openProcId, masterSystemTime); > // Do checks to see if we need to compact (references or too many files) > // TODO: SHX, do not do this for replica regions? Otherwise, it is going to > lost data locality for primary regions. > for (HStore s : r.stores.values()) { > if (s.hasReferences() || s.needsCompaction()) { > this.compactSplitThread.requestSystemCompaction(r, s, "Opening Region"); > } > } > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24581) Replica regions should not trigger any compaction
[ https://issues.apache.org/jira/browse/HBASE-24581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Huaxiang Sun updated HBASE-24581: - Issue Type: Improvement (was: Bug) Priority: Major (was: Critical) Debugged more; found out that the check for replica regions is there when it goes to store.selectCompaction. Since it is deep in the stack, it can be improved by moving the check to the very beginning of the compaction process. > Replica regions should not trigger any compaction > - > > Key: HBASE-24581 > URL: https://issues.apache.org/jira/browse/HBASE-24581 > Project: HBase > Issue Type: Improvement > Components: read replicas >Affects Versions: 2.3.0 >Reporter: Huaxiang Sun >Assignee: Huaxiang Sun >Priority: Major > > I found that in certain cases replica regions can trigger compaction, one example as follows; need to check all places to avoid compaction for replica regions.
> {code:java}
> @Override
> public void postOpenDeployTasks(final PostOpenDeployContext context) throws IOException {
>   HRegion r = context.getRegion();
>   long openProcId = context.getOpenProcId();
>   long masterSystemTime = context.getMasterSystemTime();
>   rpcServices.checkOpen();
>   LOG.info("Post open deploy tasks for {}, openProcId={}, masterSystemTime={}",
>     r.getRegionInfo().getRegionNameAsString(), openProcId, masterSystemTime);
>   // Do checks to see if we need to compact (references or too many files)
>   // TODO: SHX, do not do this for replica regions? Otherwise, it is going to lose data locality for primary regions.
>   for (HStore s : r.stores.values()) {
>     if (s.hasReferences() || s.needsCompaction()) {
>       this.compactSplitThread.requestSystemCompaction(r, s, "Opening Region");
>     }
>   }
> }
> {code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
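The fix direction described above — bail out for replica regions before any compaction bookkeeping happens — can be modeled with a small self-contained sketch. The `Region`/`Store` stand-in types below are invented for illustration; the real code would consult `RegionInfo.getReplicaId()` and go through `CompactSplit`:

```java
import java.util.List;

// Minimal stand-ins for the real HBase types; illustrative only.
public class ReplicaCompactionDemo {
  static final int DEFAULT_REPLICA_ID = 0; // primary region's replica id

  static class Store {
    final boolean needsCompaction;
    Store(boolean needsCompaction) { this.needsCompaction = needsCompaction; }
  }

  static class Region {
    final int replicaId;
    final List<Store> stores;
    Region(int replicaId, List<Store> stores) { this.replicaId = replicaId; this.stores = stores; }
  }

  // Early bail-out: replica regions read the primary's files, so compacting
  // from a replica would churn files the primary relies on for locality.
  static int requestCompactions(Region r) {
    if (r.replicaId != DEFAULT_REPLICA_ID) {
      return 0; // replica: skip the whole check/request loop up front
    }
    int requested = 0;
    for (Store s : r.stores) {
      if (s.needsCompaction) {
        requested++; // real code: compactSplitThread.requestSystemCompaction(...)
      }
    }
    return requested;
  }

  public static void main(String[] args) {
    Region primary = new Region(0, List.of(new Store(true)));
    Region replica = new Region(1, List.of(new Store(true)));
    System.out.println(requestCompactions(primary)); // 1
    System.out.println(requestCompactions(replica)); // 0
  }
}
```

Putting the replica check at the top, rather than deep inside `store.selectCompaction`, is exactly the "early stage" move the updated issue title describes.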
[GitHub] [hbase] Apache-HBase commented on pull request #1924: HBASE-24552 Replica region needs to check if primary region directory…
Apache-HBase commented on pull request #1924: URL: https://github.com/apache/hbase/pull/1924#issuecomment-647765625 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 2m 7s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 22s | master passed | | +1 :green_heart: | checkstyle | 1m 22s | master passed | | +1 :green_heart: | spotbugs | 2m 25s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 16s | the patch passed | | +1 :green_heart: | checkstyle | 1m 36s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 13m 54s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 2m 46s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 13s | The patch does not generate ASF License warnings. | | | | 41m 36s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1924/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1924 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 249e7065ea54 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 5a79a1c83c | | Max. process+thread count | 84 (vs. 
ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1924/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] ndimiduk commented on a change in pull request #1933: HBASE-24588 : Submit task for NormalizationPlan
ndimiduk commented on a change in pull request #1933: URL: https://github.com/apache/hbase/pull/1933#discussion_r443804878 ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java ## @@ -1304,8 +1304,8 @@ private void checkAndGetTableName(byte[] encodeRegionName, AtomicReference procedureCall(tableName, request, - (s, c, req, done) -> s.mergeTableRegions(c, req, done), (resp) -> resp.getProcId(), +this.procedureCall(tableName, request, + MasterService.Interface::mergeTableRegions, MergeTableRegionsResponse::getProcId, Review comment: yes please for method references, ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java ## @@ -473,6 +473,10 @@ public void run() { // Cached clusterId on stand by masters to serve clusterID requests from clients. private final CachedClusterId cachedClusterId; + // Split/Merge Normalization plan executes asynchronously and the caller blocks on + // waiting max 5 sec for single plan to complete with success/failure. + private static final int NORMALIZATION_PLAN_WAIT_TIMEOUT = 5; Review comment: I'm still not convinced we should do this, per my other comment :) ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java ## @@ -473,6 +473,10 @@ public void run() { // Cached clusterId on stand by masters to serve clusterID requests from clients. private final CachedClusterId cachedClusterId; + // Split/Merge Normalization plan executes asynchronously and the caller blocks on + // waiting max 5 sec for single plan to complete with success/failure. + private static final int NORMALIZATION_PLAN_WAIT_TIMEOUT = 5; Review comment: nit: i like to include a unit in these types of constants. i.e., `NORMALIZATION_PLAN_WAIT_TIMEOUT_SEC`. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
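The "yes please for method references" remark above refers to replacing an explicit forwarding lambda such as `(s, c, req, done) -> s.mergeTableRegions(c, req, done)` with an unbound method reference like `MasterService.Interface::mergeTableRegions`. A minimal self-contained illustration of the equivalence — the `Service` and `Call` types here are invented stand-ins, not HBase interfaces:

```java
public class MethodRefDemo {
  interface Service {
    String merge(String a, String b);
  }

  // Functional shape whose FIRST parameter is the receiver, mirroring
  // (s, c, req, done) -> s.mergeTableRegions(c, req, done).
  interface Call {
    String apply(Service s, String a, String b);
  }

  static class ServiceImpl implements Service {
    public String merge(String a, String b) { return a + "+" + b; }
  }

  public static void main(String[] args) {
    Service svc = new ServiceImpl();
    Call lambda = (s, a, b) -> s.merge(a, b); // explicit forwarding lambda
    Call ref = Service::merge;                // unbound method reference, same behavior
    System.out.println(lambda.apply(svc, "r1", "r2")); // r1+r2
    System.out.println(ref.apply(svc, "r1", "r2"));    // r1+r2
  }
}
```

When the lambda body does nothing but forward its parameters in order, the method reference expresses the same call with less noise, which is why reviewers routinely ask for it.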
[GitHub] [hbase] ndimiduk commented on a change in pull request #1933: HBASE-24588 : Submit task for NormalizationPlan
ndimiduk commented on a change in pull request #1933: URL: https://github.com/apache/hbase/pull/1933#discussion_r443803523 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java ## @@ -1957,17 +1959,27 @@ public boolean normalizeRegions() throws IOException { continue; } - // as of this writing, `plan.execute()` is non-blocking, so there's no artificial rate- - // limiting of merge requests due to this serial loop. + // as of this writing, `plan.submit()` is non-blocking and uses Async Admin APIs to + // submit task , so there's no artificial rate- + // limiting of merge/split requests due to this serial loop. for (NormalizationPlan plan : plans) { -plan.execute(admin); +Future future = plan.submit(admin); +submittedPlanList.add(future); if (plan.getType() == PlanType.SPLIT) { splitPlanCount++; } else if (plan.getType() == PlanType.MERGE) { mergePlanCount++; } } } +for (Future submittedPlan : submittedPlanList) { + try { +submittedPlan.get(); + } catch (Exception e) { +normalizationPlanFailureCount++; Review comment: Maybe we don't actually want to log "plans succeeded". Maybe it's enough that we log "N plans submitted". I would be okay with that. Even better if the debug level could log the PIDs of the submitted plans, which, I believe, requires going through `MasterServices`. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
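The submit-then-wait pattern under review can be sketched with plain `java.util.concurrent` types. This is a hedged model, not the patch itself: the executor, the fake plans, and the 5-second bound stand in for the Admin-API submission and the `NORMALIZATION_PLAN_WAIT_TIMEOUT` constant discussed above.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class NormalizerWaitDemo {
  // Walk the submitted futures, counting plans that failed or timed out.
  static int waitForPlans(List<Future<?>> submitted) {
    int failures = 0;
    for (Future<?> f : submitted) {
      try {
        // Bounded per-plan wait, mirroring the proposed 5s timeout.
        f.get(5, TimeUnit.SECONDS);
      } catch (ExecutionException | TimeoutException | InterruptedException e) {
        failures++;
      }
    }
    return failures;
  }

  public static void main(String[] args) {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    List<Future<?>> submitted = new ArrayList<>();
    Runnable okPlan = () -> { };                                          // "split" that succeeds
    Runnable badPlan = () -> { throw new RuntimeException("rejected"); }; // "merge" that fails
    submitted.add(pool.submit(okPlan));  // non-blocking: the loop never stalls
    submitted.add(pool.submit(badPlan));
    System.out.println(waitForPlans(submitted)); // 1
    pool.shutdown();
  }
}
```

The key property both reviewers are weighing: submission stays non-blocking (no artificial rate-limiting in the serial loop), while the trailing `Future.get()` pass still lets the caller report "x/y plans succeeded".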
[GitHub] [hbase] Apache-HBase commented on pull request #1933: HBASE-24588 : Submit task for NormalizationPlan
Apache-HBase commented on pull request #1933: URL: https://github.com/apache/hbase/pull/1933#issuecomment-647747299 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 11s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +0 :ok: | mvndep | 0m 20s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 59s | master passed | | +1 :green_heart: | checkstyle | 1m 43s | master passed | | +1 :green_heart: | spotbugs | 3m 18s | master passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 56s | the patch passed | | -0 :warning: | checkstyle | 1m 13s | hbase-server: The patch generated 2 new + 103 unchanged - 0 fixed = 105 total (was 103) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 12m 27s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 3m 33s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 22s | The patch does not generate ASF License warnings. 
| | | | 41m 0s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1933/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1933 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 172619e6390c 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 5a79a1c83c | | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1933/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 84 (vs. ulimit of 12500) | | modules | C: hbase-client hbase-server U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1933/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] huaxiangsun commented on a change in pull request #1924: HBASE-24552 Replica region needs to check if primary region directory…
huaxiangsun commented on a change in pull request #1924: URL: https://github.com/apache/hbase/pull/1924#discussion_r443796817 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/TransitRegionStateProcedure.java ## @@ -338,6 +345,35 @@ protected Flow executeFromState(MasterProcedureEnv env, RegionStateTransitionSta try { switch (state) { case REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE: + + // Need to do some sanity check for replica region, if the region does not exist at file + // system, do not try to assign the replica region, log error and return. + // Do not rely on master's in-memory state, primary region got its own life, it can be + // closed, offline for various reasons. Review comment: Updated to check primary region's inmemory state for defense. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24550) Passing '-h' or '--help' to bin/hbase doesn't do as expected
[ https://issues.apache.org/jira/browse/HBASE-24550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142379#comment-17142379 ] Hudson commented on HBASE-24550: Results for branch branch-1.4 [build #1229 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/1229/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/1229//General_Nightly_Build_Report/] (x) {color:red}-1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/1229//JDK7_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/1229//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 source release artifact{color} -- See build output for details. > Passing '-h' or '--help' to bin/hbase doesn't do as expected > > > Key: HBASE-24550 > URL: https://issues.apache.org/jira/browse/HBASE-24550 > Project: HBase > Issue Type: Bug > Components: Operability, shell >Reporter: Michael Stack >Assignee: wenfeiyi666 >Priority: Trivial > Fix For: 3.0.0-alpha-1, 2.3.1, 1.7.0, 2.4.0, 2.1.10, 2.2.6 > > > If I do 'bin/hbase -h' or './bin/hbase --help', it doesn't dump usage as I'd > expect. Instead, the param gets passed direct to the jvm for it to spew > complaint that the param is unrecognized. > Should do the right thing. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #1952: HBASE-24612: Consider allowing a separate EventLoopGroup for acceptin…
Apache-HBase commented on pull request #1952: URL: https://github.com/apache/hbase/pull/1952#issuecomment-647734557 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 9s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-2 Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 57s | branch-2 passed | | +1 :green_heart: | checkstyle | 1m 18s | branch-2 passed | | +1 :green_heart: | spotbugs | 2m 4s | branch-2 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 32s | the patch passed | | -0 :warning: | checkstyle | 1m 13s | hbase-server: The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 12m 32s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 2m 13s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 13s | The patch does not generate ASF License warnings. 
| | | | 35m 51s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1952 | | JIRA Issue | HBASE-24612 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 07994758f7f7 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 4506f8d8ab | | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 84 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] virajjasani commented on a change in pull request #1933: HBASE-24588 : Submit task for NormalizationPlan
virajjasani commented on a change in pull request #1933: URL: https://github.com/apache/hbase/pull/1933#discussion_r443783539 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java ## @@ -1957,17 +1959,27 @@ public boolean normalizeRegions() throws IOException { continue; } - // as of this writing, `plan.execute()` is non-blocking, so there's no artificial rate- - // limiting of merge requests due to this serial loop. + // as of this writing, `plan.submit()` is non-blocking and uses Async Admin APIs to + // submit task , so there's no artificial rate- + // limiting of merge/split requests due to this serial loop. for (NormalizationPlan plan : plans) { -plan.execute(admin); +Future future = plan.submit(admin); +submittedPlanList.add(future); if (plan.getType() == PlanType.SPLIT) { splitPlanCount++; } else if (plan.getType() == PlanType.MERGE) { mergePlanCount++; } } } +for (Future submittedPlan : submittedPlanList) { + try { +submittedPlan.get(); + } catch (Exception e) { +normalizationPlanFailureCount++; Review comment: Although going through the client interface is not necessary, in some ways it might simplify things: 1. We won't have to re-write the code to convert encodedRegionName to RegionInfo, derive TableName, etc., and then use the `MasterServices` interface, which requires additional info. 2. Since our goal is to `submit` async plans and not `execute` blocking operations, the Admin interface already has a nice non-blocking utility, which is again something we would have to implement on our own if we directly used `MasterServices` (which does use ProcV2; but we want to finally log "x/y plans succeeded", and for that to happen, using `Future.get()` and handling exceptions sounds like a better plan). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] sguggilam commented on a change in pull request #1945: HBASE-24603: Make Zookeeper sync() call synchronous
sguggilam commented on a change in pull request #1945: URL: https://github.com/apache/hbase/pull/1945#discussion_r443775794

## File path: hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKWatcher.java

## @@ -595,9 +604,28 @@ private void connectionEvent(WatchedEvent event) {
    * data of an existing node and delete or transition that node, utilizing the
    * previously read version and data. We want to ensure that the version read
    * is up-to-date from when we begin the operation.
+   *
    */
-  public void sync(String path) throws KeeperException {
-    this.recoverableZooKeeper.sync(path, null, null);
+  public void syncOrTimeout(String path) throws KeeperException {
+    final CountDownLatch latch = new CountDownLatch(1);
+    long startTime = EnvironmentEdgeManager.currentTime();
+    this.recoverableZooKeeper.sync(path, (i, s, o) -> latch.countDown(), null);

Review comment: Why can't we instead pass ctx as the last argument instead of null during the actual API call? Just in case we want to use this field somewhere else in the future to get the context in the callback.
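The latch-plus-timeout shape under discussion can be sketched in isolation. The VoidCallback interface below is a stand-in mirroring the (rc, path, ctx) shape of ZooKeeper's AsyncCallback.VoidCallback, and syncWithTimeout is a hypothetical name — this is not the patch's actual code:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class SyncTimeoutSketch {

  // Stand-in for ZooKeeper's AsyncCallback.VoidCallback: (rc, path, ctx).
  interface VoidCallback {
    void processResult(int rc, String path, Object ctx);
  }

  // Kick off an async operation, then bound the wait with a timeout instead of
  // blocking forever -- the general shape of the syncOrTimeout() proposal.
  static boolean syncWithTimeout(String path, long timeoutMs) throws InterruptedException {
    final CountDownLatch latch = new CountDownLatch(1);
    // A context object could be handed to the callback instead of null, as the
    // reviewer suggests; here it is merely threaded through for illustration.
    final Object ctx = "caller-context";
    VoidCallback cb = (rc, p, c) -> latch.countDown();
    // Simulate the server completing the sync on another thread.
    new Thread(() -> cb.processResult(0, path, ctx)).start();
    // Returns true if the callback fired in time, false on timeout.
    return latch.await(timeoutMs, TimeUnit.MILLISECONDS);
  }
}
```

The ctx argument costs nothing when unused, so passing it through (rather than null) keeps the door open for correlating callbacks with requests later.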
[GitHub] [hbase] Apache-HBase commented on pull request #1939: HBASE-24597 : Backport HBASE-24380 (Provide WAL splitting journal logging) (#1860)
Apache-HBase commented on pull request #1939: URL: https://github.com/apache/hbase/pull/1939#issuecomment-647723738 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 38s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -0 :warning: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ branch-1 Compile Tests _ | | +1 :green_heart: | mvninstall | 9m 48s | branch-1 passed | | +1 :green_heart: | compile | 0m 40s | branch-1 passed with JDK v1.8.0_252 | | +1 :green_heart: | compile | 0m 45s | branch-1 passed with JDK v1.7.0_262 | | +1 :green_heart: | checkstyle | 1m 44s | branch-1 passed | | +1 :green_heart: | shadedjars | 3m 14s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 49s | branch-1 passed with JDK v1.8.0_252 | | +1 :green_heart: | javadoc | 0m 40s | branch-1 passed with JDK v1.7.0_262 | | +0 :ok: | spotbugs | 2m 58s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 2m 56s | branch-1 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 58s | the patch passed | | +1 :green_heart: | compile | 0m 41s | the patch passed with JDK v1.8.0_252 | | +1 :green_heart: | javac | 0m 41s | the patch passed | | +1 :green_heart: | compile | 0m 46s | the patch passed with JDK v1.7.0_262 | | +1 :green_heart: | javac | 0m 46s | the patch passed | | +1 :green_heart: | checkstyle | 1m 31s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. 
| | +1 :green_heart: | shadedjars | 2m 55s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | hadoopcheck | 4m 52s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2. | | +1 :green_heart: | javadoc | 0m 30s | the patch passed with JDK v1.8.0_252 | | +1 :green_heart: | javadoc | 0m 39s | the patch passed with JDK v1.7.0_262 | | +1 :green_heart: | findbugs | 2m 54s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 124m 11s | hbase-server in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | The patch does not generate ASF License warnings. | | | | 166m 13s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1939/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1939 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 13b4f69c3028 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1939/out/precommit/personality/provided.sh | | git revision | branch-1 / 655658c | | Default Java | 1.7.0_262 | | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_252 /usr/lib/jvm/zulu-7-amd64:1.7.0_262 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1939/2/testReport/ | | Max. process+thread count | 4334 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1939/2/console | | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1952: HBASE-24612: Consider allowing a separate EventLoopGroup for acceptin…
Apache-HBase commented on pull request #1952: URL: https://github.com/apache/hbase/pull/1952#issuecomment-647718498 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 20s | Docker mode activated. | | -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 44s | branch-2 passed | | +1 :green_heart: | compile | 1m 10s | branch-2 passed | | +1 :green_heart: | shadedjars | 6m 31s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 42s | hbase-server in branch-2 failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 24s | the patch passed | | +1 :green_heart: | compile | 1m 8s | the patch passed | | +1 :green_heart: | javac | 1m 8s | the patch passed | | +1 :green_heart: | shadedjars | 6m 30s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 40s | hbase-server in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 208m 20s | hbase-server in the patch passed. 
| | | | 237m 25s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.9 Server=19.03.9 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1952 | | JIRA Issue | HBASE-24612 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 36995c3f2c6a 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / f3d47d3c8e | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/1/testReport/ | | Max. process+thread count | 2528 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1953: HBASE-21773 - Addendum - Bring back "public static Job createSubmitta…
Apache-HBase commented on pull request #1953: URL: https://github.com/apache/hbase/pull/1953#issuecomment-647716294 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 15s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ branch-2.3 Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 58s | branch-2.3 passed | | +1 :green_heart: | checkstyle | 0m 20s | branch-2.3 passed | | +1 :green_heart: | spotbugs | 0m 43s | branch-2.3 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 32s | the patch passed | | -0 :warning: | checkstyle | 0m 19s | hbase-mapreduce: The patch generated 11 new + 44 unchanged - 0 fixed = 55 total (was 44) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 18m 48s | Patch does not cause any errors with Hadoop 2.10.0 or 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 0m 50s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 12s | The patch does not generate ASF License warnings. 
| | | | 37m 43s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1953/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1953 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux ed5269722af8 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.3 / 8a50f2e4ee | | checkstyle | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1953/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-mapreduce.txt | | Max. process+thread count | 84 (vs. ulimit of 12500) | | modules | C: hbase-mapreduce U: hbase-mapreduce | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1953/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1953: HBASE-21773 - Addendum - Bring back "public static Job createSubmitta…
Apache-HBase commented on pull request #1953: URL: https://github.com/apache/hbase/pull/1953#issuecomment-647716111 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 40s | Docker mode activated. | | -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2.3 Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 27s | branch-2.3 passed | | +1 :green_heart: | compile | 0m 30s | branch-2.3 passed | | +1 :green_heart: | shadedjars | 5m 58s | branch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 21s | hbase-mapreduce in branch-2.3 failed. | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 4m 1s | the patch passed | | +1 :green_heart: | compile | 0m 28s | the patch passed | | +1 :green_heart: | javac | 0m 28s | the patch passed | | +1 :green_heart: | shadedjars | 6m 59s | patch has no errors when building our shaded downstream artifacts. | | -0 :warning: | javadoc | 0m 25s | hbase-mapreduce in the patch failed. | ||| _ Other Tests _ | | +1 :green_heart: | unit | 12m 11s | hbase-mapreduce in the patch passed. 
| | | | 37m 20s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1953/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1953 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux e711dccf74d8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.3 / 8a50f2e4ee | | Default Java | 2020-01-14 | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1953/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-mapreduce.txt | | javadoc | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1953/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-mapreduce.txt | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1953/1/testReport/ | | Max. process+thread count | 3777 (vs. ulimit of 12500) | | modules | C: hbase-mapreduce U: hbase-mapreduce | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1953/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1953: HBASE-21773 - Addendum - Bring back "public static Job createSubmitta…
Apache-HBase commented on pull request #1953: URL: https://github.com/apache/hbase/pull/1953#issuecomment-647713880 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 39s | Docker mode activated. | | -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2.3 Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 49s | branch-2.3 passed | | +1 :green_heart: | compile | 0m 26s | branch-2.3 passed | | +1 :green_heart: | shadedjars | 5m 1s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 20s | branch-2.3 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 23s | the patch passed | | +1 :green_heart: | compile | 0m 25s | the patch passed | | +1 :green_heart: | javac | 0m 25s | the patch passed | | +1 :green_heart: | shadedjars | 5m 1s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 17s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 13m 2s | hbase-mapreduce in the patch passed. | | | | 33m 45s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1953/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1953 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 4e5104f6e616 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.3 / 8a50f2e4ee | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1953/1/testReport/ | | Max. 
process+thread count | 3612 (vs. ulimit of 12500) | | modules | C: hbase-mapreduce U: hbase-mapreduce | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1953/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] ndimiduk commented on pull request #1620: HBASE-23339 Release scripts should use forwarded gpg-agent
ndimiduk commented on pull request #1620: URL: https://github.com/apache/hbase/pull/1620#issuecomment-647710176 > the object directory messages are a side effect of using the shared objects. It shouldn't be listed as an error since git then immediately checks the alternates we provide and finds what it needs. I think the build failed for me due to this error. Will try it again for RC1.
[jira] [Commented] (HBASE-24205) Create metric to know the number of reads that happens from memstore
[ https://issues.apache.org/jira/browse/HBASE-24205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142350#comment-17142350 ] Nick Dimiduk commented on HBASE-24205: -- I updated the fixVersion to include 2.3.0 so that I will remember this one in my issue filter. Thanks [~ram_krish]. > Create metric to know the number of reads that happens from memstore > > > Key: HBASE-24205 > URL: https://issues.apache.org/jira/browse/HBASE-24205 > Project: HBase > Issue Type: Improvement > Components: metrics >Affects Versions: 3.0.0-alpha-1 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0, 2.2.6 > > Attachments: screenshot.png, screenshot_tablevscf.png > > > A metric to identify number of reads that were served from memstore (atleast > if the gets can be accounted for) then it gives a value addition to know if > among the reads how much was targeted at the most recent data. > Currently the existing metric framework at region level should be enough but > we can also add a metric per store level. That will be more granular. > We can also expose this via HbTop. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24205) Create metric to know the number of reads that happens from memstore
[ https://issues.apache.org/jira/browse/HBASE-24205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-24205: - Fix Version/s: 2.3.0 > Create metric to know the number of reads that happens from memstore > > > Key: HBASE-24205 > URL: https://issues.apache.org/jira/browse/HBASE-24205 > Project: HBase > Issue Type: Improvement > Components: metrics >Affects Versions: 3.0.0-alpha-1 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0, 2.2.6 > > Attachments: screenshot.png, screenshot_tablevscf.png > > > A metric to identify number of reads that were served from memstore (atleast > if the gets can be accounted for) then it gives a value addition to know if > among the reads how much was targeted at the most recent data. > Currently the existing metric framework at region level should be enough but > we can also add a metric per store level. That will be more granular. > We can also expose this via HbTop. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] ndimiduk commented on a change in pull request #1953: HBASE-21773 - Addendum - Bring back "public static Job createSubmitta…
ndimiduk commented on a change in pull request #1953: URL: https://github.com/apache/hbase/pull/1953#discussion_r443757191

## File path: hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java

## @@ -137,6 +137,103 @@ public Job createSubmittableJob(Configuration conf) throws IOException {
     return job;
   }

+  /**
+   * Sets up the actual job.
+   *
+   * @param conf The current configuration.
+   * @param args The command line parameters.
+   * @return The newly created job.
+   * @throws IOException When setting up the job fails.
+   * @deprecated please use main method instead.

Review comment: How about making the comment include some more details about the deprecation? Something like `As of release 2.3.0, this will be removed in HBase 4.0.0.`
[GitHub] [hbase] HorizonNet commented on a change in pull request #1953: HBASE-21773 - Addendum - Bring back "public static Job createSubmitta…
HorizonNet commented on a change in pull request #1953: URL: https://github.com/apache/hbase/pull/1953#discussion_r443755808

## File path: hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java

## @@ -137,6 +137,103 @@ public Job createSubmittableJob(Configuration conf) throws IOException {
     return job;
   }

+  /**
+   * Sets up the actual job.
+   *
+   * @param conf The current configuration.
+   * @param args The command line parameters.
+   * @return The newly created job.
+   * @throws IOException When setting up the job fails.
+   * @deprecated please use main method instead.

Review comment: Please also include since which release it is deprecated. We just had a case where a deprecation comment was not in line with our ref documentation.
[GitHub] [hbase] virajjasani commented on a change in pull request #1953: HBASE-21773 - Addendum - Bring back "public static Job createSubmitta…
virajjasani commented on a change in pull request #1953: URL: https://github.com/apache/hbase/pull/1953#discussion_r443753904

## File path: hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java

## @@ -137,6 +137,103 @@ public Job createSubmittableJob(Configuration conf) throws IOException {
     return job;
   }

+  /**
+   * Sets up the actual job.
+   *
+   * @param conf The current configuration.
+   * @param args The command line parameters.
+   * @return The newly created job.
+   * @throws IOException When setting up the job fails.
+   * @deprecated please use main method instead.

Review comment: We might want to suggest `This will be deprecated in 3.0 release`.
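Taken together, the suggestions in this review thread amount to a @deprecated tag that names both the release where deprecation starts and the planned removal release. A minimal sketch of that convention — the class name is hypothetical and the tag wording is the reviewers' proposal, not the merged text:

```java
public class RowCounterDeprecationSketch {

  /**
   * Sets up the actual job.
   *
   * @deprecated As of release 2.3.0, this will be removed in HBase 4.0.0.
   *             Use the main method instead.
   */
  @Deprecated
  public static String createSubmittableJob(String[] args) {
    // Placeholder body; the real method builds and returns a MapReduce Job.
    return "job:" + args.length;
  }
}
```

Pinning both releases in the Javadoc keeps the source in line with the project's compatibility/reference documentation, which is exactly the mismatch HorizonNet describes above.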
[jira] [Commented] (HBASE-24205) Create metric to know the number of reads that happens from memstore
[ https://issues.apache.org/jira/browse/HBASE-24205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142327#comment-17142327 ] ramkrishna.s.vasudevan commented on HBASE-24205: Thanks [~ndimiduk] . Ya I confirmed with [~zghao] before I pushed it. He was fine with pushing it there. Hence pushed it. > Create metric to know the number of reads that happens from memstore > > > Key: HBASE-24205 > URL: https://issues.apache.org/jira/browse/HBASE-24205 > Project: HBase > Issue Type: Improvement > Components: metrics >Affects Versions: 3.0.0-alpha-1 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0, 2.2.6 > > Attachments: screenshot.png, screenshot_tablevscf.png > > > A metric to identify number of reads that were served from memstore (atleast > if the gets can be accounted for) then it gives a value addition to know if > among the reads how much was targeted at the most recent data. > Currently the existing metric framework at region level should be enough but > we can also add a metric per store level. That will be more granular. > We can also expose this via HbTop. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24205) Create metric to know the number of reads that happens from memstore
[ https://issues.apache.org/jira/browse/HBASE-24205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142324#comment-17142324 ] Nick Dimiduk commented on HBASE-24205: -- The 2.3.0RC failed, so i'm okay with this on branch-2.3, go ahead. I'm surprised this was added to branch-2.2. It's a new feature/improvement, not a bug fix, I don't think it should be on a patch release. > Create metric to know the number of reads that happens from memstore > > > Key: HBASE-24205 > URL: https://issues.apache.org/jira/browse/HBASE-24205 > Project: HBase > Issue Type: Improvement > Components: metrics >Affects Versions: 3.0.0-alpha-1 >Reporter: ramkrishna.s.vasudevan >Assignee: ramkrishna.s.vasudevan >Priority: Major > Fix For: 3.0.0-alpha-1, 2.4.0, 2.2.6 > > Attachments: screenshot.png, screenshot_tablevscf.png > > > A metric to identify number of reads that were served from memstore (atleast > if the gets can be accounted for) then it gives a value addition to know if > among the reads how much was targeted at the most recent data. > Currently the existing metric framework at region level should be enough but > we can also add a metric per store level. That will be more granular. > We can also expose this via HbTop. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-21773) rowcounter utility should respond to pleas for help
[ https://issues.apache.org/jira/browse/HBASE-21773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142323#comment-17142323 ] Wellington Chevreuil commented on HBASE-21773: -- Created addendum [PR|https://github.com/apache/hbase/pull/1953] bringing back mentioned method. > rowcounter utility should respond to pleas for help > --- > > Key: HBASE-21773 > URL: https://issues.apache.org/jira/browse/HBASE-21773 > Project: HBase > Issue Type: Bug > Components: tooling >Affects Versions: 2.1.0 >Reporter: Sean Busbey >Assignee: Wellington Chevreuil >Priority: Critical > Fix For: 3.0.0-alpha-1, 2.3.0 > > Attachments: HBASE-21773.master.001.patch, > HBASE-21773.master.002.patch, HBASE-21773.master.003.patch, > HBASE-21773.master.004.patch > > > {{hbase rowcounter}} does not respond to reasonable requests for help, i.e. > {{--help}}, {{-h}}, or {{-?}} > {code} > [systest@busbey-training-1 root]$ hbase rowcounter -? > OpenJDK 64-Bit Server VM warning: Using incremental CMS is deprecated and > will likely be removed in a future release > 19/01/24 12:30:00 INFO client.RMProxy: Connecting to ResourceManager at > busbey-training-1.gce.cloudera.com/172.31.116.31:8032 > 19/01/24 12:30:01 INFO hdfs.DFSClient: Created token for systest: > HDFS_DELEGATION_TOKEN owner=syst...@gce.cloudera.com, renewer=yarn, > realUser=, issueDate=1548361801519, maxDate=1548966601519, sequenceNumber=3, > masterKeyId=8 on 172.31.116.31:8020 > 19/01/24 12:30:01 INFO kms.KMSClientProvider: Getting new token from > https://busbey-training-3.gce.cloudera.com:16000/kms/v1/, > renewer:yarn/busbey-training-1.gce.cloudera@gce.cloudera.com > 19/01/24 12:30:02 INFO kms.KMSClientProvider: New token received: (Kind: > kms-dt, Service: 172.31.116.52:16000, Ident: (kms-dt owner=systest, > renewer=yarn, realUser=, issueDate=1548361801965, maxDate=1548966601965, > sequenceNumber=5, masterKeyId=17)) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > 
hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: HDFS_DELEGATION_TOKEN, > Service: 172.31.116.31:8020, Ident: (token for systest: HDFS_DELEGATION_TOKEN > owner=syst...@gce.cloudera.com, renewer=yarn, realUser=, > issueDate=1548361801519, maxDate=1548966601519, sequenceNumber=3, > masterKeyId=8) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: kms-dt, Service: > 172.31.116.52:16000, Ident: (kms-dt owner=systest, renewer=yarn, realUser=, > issueDate=1548361801965, maxDate=1548966601965, sequenceNumber=5, > masterKeyId=17) > 19/01/24 12:30:02 INFO kms.KMSClientProvider: Getting new token from > https://busbey-training-4.gce.cloudera.com:16000/kms/v1/, > renewer:yarn/busbey-training-1.gce.cloudera@gce.cloudera.com > 19/01/24 12:30:02 INFO kms.KMSClientProvider: New token received: (Kind: > kms-dt, Service: 172.31.116.50:16000, Ident: (kms-dt owner=systest, > renewer=yarn, realUser=, issueDate=1548361802363, maxDate=1548966602363, > sequenceNumber=6, masterKeyId=18)) > 19/01/24 12:30:02 INFO security.TokenCache: Got dt for > hdfs://busbey-training-1.gce.cloudera.com:8020; Kind: kms-dt, Service: > 172.31.116.50:16000, Ident: (kms-dt owner=systest, renewer=yarn, realUser=, > issueDate=1548361802363, maxDate=1548966602363, sequenceNumber=6, > masterKeyId=18) > 19/01/24 12:30:02 INFO mapreduce.JobResourceUploader: Disabling Erasure > Coding for path: /user/systest/.staging/job_1548349234632_0003 > 19/01/24 12:30:03 INFO mapreduce.JobSubmitter: Cleaning up the staging area > /user/systest/.staging/job_1548349234632_0003 > Exception in thread "main" java.lang.IllegalArgumentException: Illegal first > character <45> at 0. User-space table qualifiers can only start with > 'alphanumeric characters' from any language: -? 
> at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:193) > at > org.apache.hadoop.hbase.TableName.isLegalTableQualifierName(TableName.java:156) > at org.apache.hadoop.hbase.TableName.&lt;init&gt;(TableName.java:346) > at > org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:382) > at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:469) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormat.initialize(TableInputFormat.java:198) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:243) > at > org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:254) > at > org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:310) > at > org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:327) > at >
[GitHub] [hbase] wchevreuil opened a new pull request #1953: HBASE-21773 - Addendum - Bring back "public static Job createSubmitta…
wchevreuil opened a new pull request #1953: URL: https://github.com/apache/hbase/pull/1953 …bleJob(Configuration conf, String[] args)" for compatibility reasons.
[GitHub] [hbase] virajjasani merged pull request #1939: HBASE-24597 : Backport HBASE-24380 (Provide WAL splitting journal logging) (#1860)
virajjasani merged pull request #1939: URL: https://github.com/apache/hbase/pull/1939
[GitHub] [hbase] bharathv commented on a change in pull request #1945: HBASE-24603: Make Zookeeper sync() call synchronous
bharathv commented on a change in pull request #1945: URL: https://github.com/apache/hbase/pull/1945#discussion_r443737875 ## File path: hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKWatcher.java ## @@ -595,9 +604,28 @@ private void connectionEvent(WatchedEvent event) { * data of an existing node and delete or transition that node, utilizing the * previously read version and data. We want to ensure that the version read * is up-to-date from when we begin the operation. + * */ - public void sync(String path) throws KeeperException { -this.recoverableZooKeeper.sync(path, null, null); + public void syncOrTimeout(String path) throws KeeperException { +final CountDownLatch latch = new CountDownLatch(1); +long startTime = EnvironmentEdgeManager.currentTime(); +this.recoverableZooKeeper.sync(path, (i, s, o) -> latch.countDown(), null); Review comment: I kind of like the fact that the code is a bit concise (since there is no real logic in this lambda in this case) but I get the concern about readability. I don't have a strong preference though, I can switch it to an inline class. ## File path: hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKWatcher.java ## @@ -595,9 +604,28 @@ private void connectionEvent(WatchedEvent event) { * data of an existing node and delete or transition that node, utilizing the * previously read version and data. We want to ensure that the version read * is up-to-date from when we begin the operation. + * */ - public void sync(String path) throws KeeperException { -this.recoverableZooKeeper.sync(path, null, null); + public void syncOrTimeout(String path) throws KeeperException { +final CountDownLatch latch = new CountDownLatch(1); +long startTime = EnvironmentEdgeManager.currentTime(); +this.recoverableZooKeeper.sync(path, (i, s, o) -> latch.countDown(), null); +try { + if (!latch.await(zkSyncTimeout, TimeUnit.MILLISECONDS)) { +LOG.error("sync() operation to ZK timed out. Configured timeout: {}ms. 
This usually points " Review comment: Any particular reason this has to be WARN? This can potentially be a correctness issue at this point and hence I chose the higher log level. ## File path: hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKWatcher.java ## @@ -595,9 +604,28 @@ private void connectionEvent(WatchedEvent event) { * data of an existing node and delete or transition that node, utilizing the * previously read version and data. We want to ensure that the version read * is up-to-date from when we begin the operation. + * */ - public void sync(String path) throws KeeperException { -this.recoverableZooKeeper.sync(path, null, null); + public void syncOrTimeout(String path) throws KeeperException { +final CountDownLatch latch = new CountDownLatch(1); +long startTime = EnvironmentEdgeManager.currentTime(); +this.recoverableZooKeeper.sync(path, (i, s, o) -> latch.countDown(), null); Review comment: ok will do. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
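The pattern under review — turning an asynchronous `sync()` into a bounded wait with a `CountDownLatch` — can be sketched in isolation. This is a self-contained illustration, not the actual `ZKWatcher` code: the ZooKeeper client is simulated with a plain thread, and the names `asyncSync` and `syncOrTimeout` are used only for the sketch:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class SyncOrTimeoutSketch {
  // Simulates an async sync() call that invokes a callback once the
  // (pretend) server acknowledges, after serverDelayMs milliseconds.
  static void asyncSync(Runnable callback, long serverDelayMs) {
    new Thread(() -> {
      try {
        Thread.sleep(serverDelayMs);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
      callback.run();
    }).start();
  }

  // Returns true if the sync completed within timeoutMs, false on timeout.
  static boolean syncOrTimeout(long timeoutMs, long serverDelayMs)
      throws InterruptedException {
    final CountDownLatch latch = new CountDownLatch(1);
    long startTime = System.currentTimeMillis();
    // The callback only counts down the latch, mirroring the lambda in the patch.
    asyncSync(latch::countDown, serverDelayMs);
    boolean completed = latch.await(timeoutMs, TimeUnit.MILLISECONDS);
    if (!completed) {
      System.err.println("sync timed out after "
          + (System.currentTimeMillis() - startTime) + "ms");
    }
    return completed;
  }

  public static void main(String[] args) throws InterruptedException {
    System.out.println(syncOrTimeout(1000, 10)); // fast "server": true
    System.out.println(syncOrTimeout(50, 500));  // slow "server": false
  }
}
```

The key property being debated in the review is visible here: a timeout does not cancel the outstanding operation, it only bounds how long the caller blocks — which is why the log level on the timeout path matters.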
[GitHub] [hbase] huaxiangsun commented on a change in pull request #1924: HBASE-24552 Replica region needs to check if primary region directory…
huaxiangsun commented on a change in pull request #1924: URL: https://github.com/apache/hbase/pull/1924#discussion_r443735473 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/TransitRegionStateProcedure.java ## @@ -338,6 +345,35 @@ protected Flow executeFromState(MasterProcedureEnv env, RegionStateTransitionSta try { switch (state) { case REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE: + + // Need to do some sanity check for replica region, if the region does not exist at file + // system, do not try to assign the replica region, log error and return. + // Do not rely on master's in-memory state, primary region got its own life, it can be + // closed, offline for various reasons. Review comment: Probably you are right here. The defense in master can be changed to check its in-memory data structures, if master is not aware of the primary region, then there is something wrong. I will move this check to region server' region open. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] Apache-HBase commented on pull request #1952: HBASE-24612: Consider allowing a separate EventLoopGroup for acceptin…
Apache-HBase commented on pull request #1952: URL: https://github.com/apache/hbase/pull/1952#issuecomment-647679579 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 41s | Docker mode activated. | | -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 42s | branch-2 passed | | +1 :green_heart: | compile | 0m 56s | branch-2 passed | | +1 :green_heart: | shadedjars | 5m 2s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 37s | branch-2 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 18s | the patch passed | | +1 :green_heart: | compile | 0m 54s | the patch passed | | +1 :green_heart: | javac | 0m 54s | the patch passed | | +1 :green_heart: | shadedjars | 4m 55s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 34s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 135m 22s | hbase-server in the patch passed. 
| | | | 158m 2s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.11 Server=19.03.11 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/1952 | | JIRA Issue | HBASE-24612 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux a677824be929 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / f3d47d3c8e | | Default Java | 1.8.0_232 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/1/testReport/ | | Max. process+thread count | 3770 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1952/1/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] huaxiangsun commented on a change in pull request #1924: HBASE-24552 Replica region needs to check if primary region directory…
huaxiangsun commented on a change in pull request #1924: URL: https://github.com/apache/hbase/pull/1924#discussion_r443727313 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/TransitRegionStateProcedure.java ## @@ -338,6 +345,35 @@ protected Flow executeFromState(MasterProcedureEnv env, RegionStateTransitionSta try { switch (state) { case REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE: + + // Need to do some sanity check for replica region, if the region does not exist at file + // system, do not try to assign the replica region, log error and return. + // Do not rely on master's in-memory state, primary region got its own life, it can be + // closed, offline for various reasons. + if (!RegionReplicaUtil.isDefaultReplica(regionNode.getRegionInfo())) { +MasterFileSystem mfs = env.getMasterServices().getMasterFileSystem(); +RegionInfo replicaRI = regionNode.getRegionInfo(); +Path tableDir = CommonFSUtils.getTableDir(mfs.getRootDir(), regionNode.getTable()); +Path regionDir = FSUtils.getRegionDirFromTableDir(tableDir, replicaRI); +FileSystem fs = mfs.getFileSystem(); +// Check if primary region directory exists +if (!fs.exists(regionDir)) { + LOG.error( +"Cannot assign replica region {} because its primary region {} does not exist" + + " at Filesystem", replicaRI, + ServerRegionReplicaUtil.getRegionInfoForDefaultReplica(replicaRI)); + return Flow.NO_MORE_STATE; +} +// check if .regionInfo exists in primary region +Path regionInfoFile = new Path(regionDir, HRegionFileSystem.REGION_INFO_FILE); +if (!fs.exists(regionInfoFile)) { + LOG.error( +"Cannot assign replica region {} because region info file does not exist in its" + + " primary region {}", replicaRI, + ServerRegionReplicaUtil.getRegionInfoForDefaultReplica(replicaRI)); + return Flow.NO_MORE_STATE; +} + } Review comment: Yeah, this is planned. Hbck2 needs work to fix any inconsistency for read replica, including but not limited to 1). 
Check consistency for replica regions (holes/overlaps). Jiras need to be created. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
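The filesystem check in the patch can be illustrated with a simplified, local-filesystem stand-in. This sketch uses `java.nio.file` rather than Hadoop's `FileSystem`, and the helper name `primaryRegionOnDisk` is invented here; only the two existence checks — the primary's region directory, then its `.regioninfo` file — mirror the patch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReplicaAssignCheck {
  // Refuse to assign a replica when the primary's on-disk state is missing:
  // first the region directory, then the .regioninfo file inside it.
  static boolean primaryRegionOnDisk(Path tableDir, String primaryEncodedName) {
    Path regionDir = tableDir.resolve(primaryEncodedName);
    if (!Files.isDirectory(regionDir)) {
      return false; // primary region directory missing; do not assign the replica
    }
    Path regionInfoFile = regionDir.resolve(".regioninfo");
    return Files.exists(regionInfoFile); // missing region info also blocks assignment
  }

  public static void main(String[] args) throws IOException {
    Path tableDir = Files.createTempDirectory("tbl");
    System.out.println(primaryRegionOnDisk(tableDir, "abc123")); // false: no region dir
    Path regionDir = Files.createDirectories(tableDir.resolve("abc123"));
    System.out.println(primaryRegionOnDisk(tableDir, "abc123")); // false: no .regioninfo
    Files.createFile(regionDir.resolve(".regioninfo"));
    System.out.println(primaryRegionOnDisk(tableDir, "abc123")); // true
  }
}
```

As the review thread notes, the reviewers leaned toward moving this guard from the master procedure to the region server's region-open path, so where the check lives may differ from the diff shown above.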
[jira] [Commented] (HBASE-24360) RollingBatchRestartRsAction loses track of dead servers
[ https://issues.apache.org/jira/browse/HBASE-24360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142293#comment-17142293 ] Bharath Vissapragada commented on HBASE-24360: -- Back ported to branch-1. Thanks for fixing this, Nick. > RollingBatchRestartRsAction loses track of dead servers > --- > > Key: HBASE-24360 > URL: https://issues.apache.org/jira/browse/HBASE-24360 > Project: HBase > Issue Type: Test > Components: integration tests >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0, 1.7.0 > > > {{RollingBatchRestartRsAction}} doesn't handle failure cases when tracking > its list of dead servers. The original author believed that a failure to > restart would result in a retry. However, by removing the dead server from > the failed list prematurely, that state is lost, and retry of that server > never occurs. Because this action doesn't ever look back to the current state > of the cluster, relying only on its local state for the current action > invocation, it never realizes the abandoned server is still dead. Instead, be > more careful to only remove the dead server from the list when the > {{startRs}} invocation claims to have been successful. -- This message was sent by Atlassian Jira (v8.3.4#803005)
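The fix described above — keep a server in the dead list until `startRs` actually reports success — can be modeled as a retry queue. This is an illustrative sketch, not the `RollingBatchRestartRsAction` code; the `Cluster` interface and method names are invented here:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class RollingRestartSketch {
  interface Cluster {
    boolean startRs(String server); // true iff the restart succeeded
  }

  // A server leaves the dead queue only when startRs succeeds; a failed
  // restart re-queues it so a later pass retries instead of abandoning it.
  static int restartAll(Queue<String> dead, Cluster cluster, int maxPasses) {
    int restarted = 0;
    for (int pass = 0; pass < maxPasses && !dead.isEmpty(); pass++) {
      int n = dead.size();
      for (int i = 0; i < n; i++) {
        String server = dead.poll();
        if (cluster.startRs(server)) {
          restarted++;        // success: server leaves the dead list
        } else {
          dead.add(server);   // failure: keep tracking it for a retry
        }
      }
    }
    return restarted;
  }

  public static void main(String[] args) {
    Queue<String> dead = new ArrayDeque<>();
    dead.add("rs1");
    dead.add("rs2");
    // rs2 fails on its first restart attempt, succeeds on the second.
    final int[] rs2Attempts = {0};
    Cluster cluster = server -> !server.equals("rs2") || ++rs2Attempts[0] > 1;
    System.out.println(restartAll(dead, cluster, 3)); // 2
  }
}
```

The bug the JIRA describes corresponds to removing the server from the queue before checking the `startRs` result — the failure branch would then silently lose the server.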
[jira] [Commented] (HBASE-24231) Add hadoop 3.2.x in our support matrix
[ https://issues.apache.org/jira/browse/HBASE-24231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142295#comment-17142295 ] Nick Dimiduk commented on HBASE-24231: -- Is the existing line for "Hadoop-3.1.1+" not sufficient? Do we need to explicitly mention every hadoop minor release? +1, let's get it in. > Add hadoop 3.2.x in our support matrix > -- > > Key: HBASE-24231 > URL: https://issues.apache.org/jira/browse/HBASE-24231 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Duo Zhang >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24360) RollingBatchRestartRsAction loses track of dead servers
[ https://issues.apache.org/jira/browse/HBASE-24360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharath Vissapragada updated HBASE-24360: - Fix Version/s: 1.7.0 > RollingBatchRestartRsAction loses track of dead servers > --- > > Key: HBASE-24360 > URL: https://issues.apache.org/jira/browse/HBASE-24360 > Project: HBase > Issue Type: Test > Components: integration tests >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.0, 1.7.0 > > > {{RollingBatchRestartRsAction}} doesn't handle failure cases when tracking > its list of dead servers. The original author believed that a failure to > restart would result in a retry. However, by removing the dead server from > the failed list prematurely, that state is lost, and retry of that server > never occurs. Because this action doesn't ever look back to the current state > of the cluster, relying only on its local state for the current action > invocation, it never realizes the abandoned server is still dead. Instead, be > more careful to only remove the dead server from the list when the > {{startRs}} invocation claims to have been successful. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-22982) Send SIGSTOP to hang or SIGCONT to resume rs and add graceful rolling restart
[ https://issues.apache.org/jira/browse/HBASE-22982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142290#comment-17142290 ] Bharath Vissapragada commented on HBASE-22982: -- Backported to branch-1 with a caveat: GracefulRollingRestartRsAction was omitted due to a dependency on RegionMover class which itself is a significant chunk of backport. I was more interested in having the SUSPEND/RESUME operations available. > Send SIGSTOP to hang or SIGCONT to resume rs and add graceful rolling restart > - > > Key: HBASE-22982 > URL: https://issues.apache.org/jira/browse/HBASE-22982 > Project: HBase > Issue Type: Sub-task > Components: integration tests >Affects Versions: 3.0.0-alpha-1 >Reporter: Szabolcs Bukros >Assignee: Szabolcs Bukros >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.3.0, 2.2.3, 1.7.0 > > > * Add a Chaos Monkey action that uses SIGSTOP and SIGCONT to hang and resume > a ratio of region servers. > * Add a Chaos Monkey action to simulate a rolling restart including > graceful_stop like functionality that unloads the regions from the server > before a restart and then places it under load again afterwards. > * Add these actions to the relevant monkeys -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] apurtell commented on a change in pull request #1945: HBASE-24603: Make Zookeeper sync() call synchronous
apurtell commented on a change in pull request #1945: URL: https://github.com/apache/hbase/pull/1945#discussion_r443718162 ## File path: hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKWatcher.java ## @@ -595,9 +604,28 @@ private void connectionEvent(WatchedEvent event) { * data of an existing node and delete or transition that node, utilizing the * previously read version and data. We want to ensure that the version read * is up-to-date from when we begin the operation. + * */ - public void sync(String path) throws KeeperException { -this.recoverableZooKeeper.sync(path, null, null); + public void syncOrTimeout(String path) throws KeeperException { +final CountDownLatch latch = new CountDownLatch(1); +long startTime = EnvironmentEdgeManager.currentTime(); +this.recoverableZooKeeper.sync(path, (i, s, o) -> latch.countDown(), null); +try { + if (!latch.await(zkSyncTimeout, TimeUnit.MILLISECONDS)) { +LOG.error("sync() operation to ZK timed out. Configured timeout: {}ms. This usually points " Review comment: This should be WARN. ## File path: hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKWatcher.java ## @@ -595,9 +604,28 @@ private void connectionEvent(WatchedEvent event) { * data of an existing node and delete or transition that node, utilizing the * previously read version and data. We want to ensure that the version read * is up-to-date from when we begin the operation. + * */ - public void sync(String path) throws KeeperException { -this.recoverableZooKeeper.sync(path, null, null); + public void syncOrTimeout(String path) throws KeeperException { +final CountDownLatch latch = new CountDownLatch(1); +long startTime = EnvironmentEdgeManager.currentTime(); +this.recoverableZooKeeper.sync(path, (i, s, o) -> latch.countDown(), null); Review comment: This has to be backported to branch-1, which requires Java 7 compatible code, and there's no real need for a lambda here. 
We have guaranteed code divergence between the branches in exchange for perhaps some improved readability. Is it worth it? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
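For the branch-1 backport concern, the lambda and its Java 7-compatible anonymous-class equivalent can be compared side by side. `VoidCallback` here is a local stand-in for ZooKeeper's `AsyncCallback.VoidCallback` so the sketch compiles on its own; the two forms are behaviorally identical:

```java
import java.util.concurrent.CountDownLatch;

public class Java7CallbackSketch {
  // Stand-in for org.apache.zookeeper.AsyncCallback.VoidCallback.
  interface VoidCallback {
    void processResult(int rc, String path, Object ctx);
  }

  public static void main(String[] args) {
    final CountDownLatch latch = new CountDownLatch(1);

    // Java 8 form, as in the patch under review:
    VoidCallback lambdaForm = (i, s, o) -> latch.countDown();

    // Java 7-compatible anonymous class, as a branch-1 backport would need:
    VoidCallback anonForm = new VoidCallback() {
      @Override
      public void processResult(int rc, String path, Object ctx) {
        latch.countDown();
      }
    };

    lambdaForm.processResult(0, "/hbase", null);
    System.out.println(latch.getCount()); // 0: the callback released the latch
  }
}
```

Either form works for the latch pattern; the trade-off raised in the review is purely between conciseness on master and keeping the code identical across branches.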