[jira] [Commented] (HBASE-28532) remove vulnerable slf4j-log4j12 dependency
[ https://issues.apache.org/jira/browse/HBASE-28532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839937#comment-17839937 ] Nikita Pande commented on HBASE-28532: -- [~guluo] Agreed. Since I was already updating slf4j.version to remove the vulnerability, I updated the version as needed in https://issues.apache.org/jira/browse/HBASE-28531 > remove vulnerable slf4j-log4j12 dependency > -- > > Key: HBASE-28532 > URL: https://issues.apache.org/jira/browse/HBASE-28532 > Project: HBase > Issue Type: Improvement > Components: hbase-operator-tools > Reporter: Nikita Pande > Priority: Major > > slf4j-log4j12 is a bridge from SLF4J to Log4j 1.x. > Since Log4j 1.x is vulnerable, it needs to be removed. > > It is to be replaced with the log4j-slf4j-impl dependency, which is a bridge > from SLF4J to Log4j 2.x. -- This message was sent by Atlassian Jira (v8.20.10#820010)
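In Maven terms, the swap described in the issue is roughly the following pom.xml edit. This is a sketch only: the version property names are placeholders, and the actual hbase-operator-tools pom may structure the change differently (for example via dependency exclusions).

```xml
<!-- Before: SLF4J -> Log4j 1.x bridge, which pulls in the vulnerable log4j 1.x -->
<!--
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-log4j12</artifactId>
  <version>${slf4j.version}</version>
</dependency>
-->

<!-- After: SLF4J -> Log4j 2.x bridge -->
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-slf4j-impl</artifactId>
  <version>${log4j2.version}</version>
</dependency>
```

Only one SLF4J binding should be on the classpath at a time, so the old bridge must be removed (or excluded from transitive dependencies), not merely supplemented.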
[jira] [Commented] (HBASE-28150) CreateTableProcedure and DeleteTableProcedure should sleep a while before retrying
[ https://issues.apache.org/jira/browse/HBASE-28150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839934#comment-17839934 ] Hudson commented on HBASE-28150: Results for branch branch-2.6 [build #101 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/101/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/101/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/101/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/101/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.6/101/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > CreateTableProcedure and DeleteTableProcedure should sleep a while before > retrying > -- > > Key: HBASE-28150 > URL: https://issues.apache.org/jira/browse/HBASE-28150 > Project: HBase > Issue Type: Improvement > Components: master, proc-v2 > Affects Versions: 2.4.14 > Reporter: chaijunjie > Assignee: chaijunjie > Priority: Major > Labels: pull-request-available > Fix For: 2.6.0, 3.0.0-beta-2, 2.5.9 > > Attachments: HBASE-28150.patch > > > Creating a table failed while executing CREATE_TABLE_WRITE_FS_LAYOUT; it then > retried again and again, writing too many procedure records to master:store. We > found more than 13000 master WAL files in oldWALs.
> > Q: should we add suspend-time logic for the create-table procedure retry? I see > TransitRegionStateProcedure has this logic. > > --- > Sorry, the screenshot upload failed, so I am copying it here: > {code:java} > // 2023-10-12 12:34:35,360 | INFO | RegionOpenAndInit-themis:a-pool-0 | > Closing region themis:a,,1697025107991.513d3d5b4d3ad5c8f13bacea4a888d69. | > org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1688) > 2023-10-12 12:34:35,360 | INFO | RegionOpenAndInit-themis:a-pool-0 | Closed > themis:a,,1697025107991.513d3d5b4d3ad5c8f13bacea4a888d69. | > org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1900) > 2023-10-12 12:34:35,360 | INFO | PEWorker-1 | Region directories are created > at hdfs://hacluster/hbase/.tmp for table themis:a | > org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.createFsLayout(CreateTableProcedure.java:346) > 2023-10-12 12:34:35,362 | WARN | PEWorker-1 | Retriable error trying to > create table=themis:a state=CREATE_TABLE_WRITE_FS_LAYOUT | > org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:159) > java.io.IOException: Unable to move table from > temp=hdfs://hacluster/hbase/.tmp/data/themis/a to hbase > root=hdfs://hacluster/hbase/data/themis/a > at > org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.moveTempDirectoryToHBaseRoot(CreateTableProcedure.java:391) > at > org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.createFsLayout(CreateTableProcedure.java:350) > at > org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.createFsLayout(CreateTableProcedure.java:318) > at > org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:121) > at > org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:75) > at > org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) > at > 
org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1650) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1396) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1000(ProcedureExecutor.java:75) > at > org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.runProcedure(ProcedureExecutor.java:1962) > at
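The suspend logic the reporter asks about boils down to a capped exponential backoff between retries, which keeps retry storms from flooding master:store with procedure records. A minimal sketch follows; the class name, constants, and method name are illustrative, not HBase's actual ProcedureUtil API.

```java
public class RetryBackoff {
  private static final long BASE_SLEEP_MS = 1000;  // illustrative values,
  private static final long MAX_SLEEP_MS = 60_000; // not HBase defaults

  /** Capped exponential backoff: 1s, 2s, 4s, ... up to 60s. */
  public static long backoffMillis(int attempt) {
    // Cap the shift so the multiplication cannot overflow a long.
    long sleep = BASE_SLEEP_MS * (1L << Math.min(attempt, 30));
    return Math.min(sleep, MAX_SLEEP_MS);
  }
}
```

On a retriable failure the procedure would suspend itself for `backoffMillis(attempt)` instead of re-executing immediately, which is the same pattern TransitRegionStateProcedure uses.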
Re: [PR] HBASE-28428 : ConnectionRegistry APIs should have timeout [hbase]
virajjasani commented on PR #5837: URL: https://github.com/apache/hbase/pull/5837#issuecomment-2071330417 @Apache9 how do we ensure the timeout is honored by the AsyncFuture? ConnectionRegistry APIs return AsyncFuture, so if we implement the timeout on the AsyncFuture as part of the connection registry implementation, then in reality wouldn't all APIs become synchronous and return actual values rather than values wrapped in an AsyncFuture? Did I get your suggestion right? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
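The distinction being discussed (attaching a timeout to the returned future versus blocking for the value) can be illustrated with plain CompletableFuture. Note that `fetchClusterId` below is a hypothetical stand-in for a ConnectionRegistry call, not the real API: a timeout attached via `orTimeout` keeps the call asynchronous, because the caller still receives a future immediately.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class RegistryTimeoutSketch {
  // Hypothetical stand-in for an async ConnectionRegistry call.
  static CompletableFuture<String> fetchClusterId() {
    return CompletableFuture.supplyAsync(() -> {
      try {
        TimeUnit.MILLISECONDS.sleep(50); // simulated RPC latency
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
      return "cluster-1";
    });
  }

  public static void main(String[] args) throws Exception {
    // Attaching a timeout does NOT make the API synchronous: the caller
    // still gets a future back right away; it merely completes
    // exceptionally with a TimeoutException if the deadline passes first.
    CompletableFuture<String> f = fetchClusterId().orTimeout(5, TimeUnit.SECONDS);
    // Only an explicit blocking call like get() makes the caller synchronous.
    System.out.println(f.get());
  }
}
```

So implementing the timeout inside the registry can still return futures; synchrony only arises if the implementation blocks before returning.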
Re: [PR] HBASE-28521 Use standard ConnectionRegistry and Client API to get reg… [hbase]
Apache9 commented on code in PR #5825: URL: https://github.com/apache/hbase/pull/5825#discussion_r1575545867

## hbase-server/src/main/java/org/apache/hadoop/hbase/replication/HBaseReplicationEndpoint.java:
@@ -224,36 +184,27 @@ public boolean isAborted() {
    * Get the list of all the region servers from the specified peer
    * @return list of region server addresses or an empty list if the slave is unavailable
    */
-  protected List<ServerName> fetchSlavesAddresses() {
-    List<String> children = null;
+  // will be overridden in tests, so protected
+  protected Collection<ServerName> fetchPeerAddresses() {
     try {
-      synchronized (zkwLock) {
-        children = ZKUtil.listChildrenAndWatchForNewChildren(zkw, zkw.getZNodePaths().rsZNode);
-      }
-    } catch (KeeperException ke) {
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("Fetch slaves addresses failed", ke);
-      }
-      reconnect(ke);
-    }
-    if (children == null) {
+      return FutureUtils.get(conn.getAdmin().getRegionServers(true));
+    } catch (IOException e) {
+      LOG.debug("Fetch peer addresses failed", e);
       return Collections.emptyList();
     }
-    List<ServerName> addresses = new ArrayList<>(children.size());
-    for (String child : children) {
-      addresses.add(ServerName.parseServerName(child));
-    }
-    return addresses;
   }

   protected synchronized void chooseSinks() {

Review Comment: There is lazy refresh logic in our code. Once there are failures connecting to a region server in the remote peer cluster, we increase the failure count for that region server; a Map in this class tracks the failure counts. If too many region servers have failed, we refresh the region server list of the remote peer cluster.
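The lazy refresh described in the review comment can be sketched as a per-server failure-count map plus a threshold check. The names below (`reportFailure`, `REFRESH_THRESHOLD`) are illustrative; the real HBaseReplicationEndpoint keeps more state and fetches the new list through the peer connection.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SinkTracker {
  // Illustrative threshold: refresh once this many distinct servers look bad.
  static final int REFRESH_THRESHOLD = 3;

  // Per-server failure counts for the remote peer cluster.
  final Map<String, Integer> badReportCount = new ConcurrentHashMap<>();

  /** Record a failure; return true if the caller should refresh the sink list. */
  boolean reportFailure(String server) {
    badReportCount.merge(server, 1, Integer::sum);
    return badReportCount.size() >= REFRESH_THRESHOLD;
  }

  /** Called after a fresh server list has been fetched elsewhere. */
  void refresh() {
    badReportCount.clear();
  }
}
```

The design choice is that the list is only refreshed on demand (when enough sinks have failed), avoiding a standing watch on the remote cluster.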
Re: [PR] HBASE-28464: Make replication ZKWatcher config customizable in extens… [hbase]
Apache9 commented on PR #5785: URL: https://github.com/apache/hbase/pull/5785#issuecomment-2071265510 #5825 is not necessary for solving the problem in this PR; #5835 is enough, as after #5835 we are able to customize the ZKClientConfig through the Configuration map in the ReplicationPeerConfig. Let's get #5835 in and then rebase this PR. #5825 solves a more general problem, where we want to allow specifying a remote cluster with something other than a zookeeper address and path. I think we should get it in for branch-2+, but I'm not sure whether we should get it in for branch-2.6/branch-2.5. Thanks @anmolnar !
Re: [PR] HBASE-28468: Integrate the data-tiering logic into cache evictions. [hbase]
Apache-HBase commented on PR #5829: URL: https://github.com/apache/hbase/pull/5829#issuecomment-2071092582 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 25s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-28463 Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 44s | HBASE-28463 passed | | +1 :green_heart: | compile | 0m 40s | HBASE-28463 passed | | +1 :green_heart: | shadedjars | 5m 47s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 24s | HBASE-28463 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 27s | the patch passed | | +1 :green_heart: | compile | 0m 39s | the patch passed | | +1 :green_heart: | javac | 0m 39s | the patch passed | | +1 :green_heart: | shadedjars | 5m 43s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 23s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 239m 7s | hbase-server in the patch passed. 
| | | | 262m 40s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/9/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5829 | | JIRA Issue | HBASE-28468 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux f3e44c8b0fb9 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-28463 / a2321ce9d8 | | Default Java | Temurin-1.8.0_352-b08 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/9/testReport/ | | Max. process+thread count | 5372 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/9/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-28150) CreateTableProcedure and DeleteTableProcedure should sleep a while before retrying
[ https://issues.apache.org/jira/browse/HBASE-28150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839861#comment-17839861 ] Hudson commented on HBASE-28150: Results for branch branch-3 [build #191 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/191/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/191/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/191/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/191/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > CreateTableProcedure and DeleteTableProcedure should sleep a while before > retrying > -- > > Key: HBASE-28150 > URL: https://issues.apache.org/jira/browse/HBASE-28150 > Project: HBase > Issue Type: Improvement > Components: master, proc-v2 > Affects Versions: 2.4.14 > Reporter: chaijunjie > Assignee: chaijunjie > Priority: Major > Labels: pull-request-available > Fix For: 2.6.0, 3.0.0-beta-2, 2.5.9 > > Attachments: HBASE-28150.patch
[jira] [Commented] (HBASE-28215) Region reopen procedure should support some sort of throttling
[ https://issues.apache.org/jira/browse/HBASE-28215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839860#comment-17839860 ] Hudson commented on HBASE-28215: Results for branch branch-3 [build #191 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/191/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/191/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/191/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/191/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Region reopen procedure should support some sort of throttling > -- > > Key: HBASE-28215 > URL: https://issues.apache.org/jira/browse/HBASE-28215 > Project: HBase > Issue Type: Improvement > Components: master, proc-v2 >Reporter: Ray Mattingly >Assignee: Ray Mattingly >Priority: Major > Fix For: 2.6.0, 3.0.0-beta-1 > > > The mass reopening of regions caused by a table descriptor modification can > be quite disruptive. For latency/error sensitive workloads, like our user > facing traffic, we need to be very careful about when we modify table > descriptors, and it can be virtually impossible to do it painlessly for busy > tables. > It would be nice if we supported configurable batching/throttling of > reopenings so that the amplitude of any disruption can be kept relatively > small. -- This message was sent by Atlassian Jira (v8.20.10#820010)
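The batching half of the proposal can be sketched as partitioning the full region list into fixed-size chunks, with the caller pausing between chunks so the disruption amplitude stays small. The helper below is illustrative only; it is not the API or the config keys the fix actually added.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchedReopen {
  /**
   * Split the full region list into independent fixed-size batches.
   * A reopen procedure would process one batch, wait a configured
   * pause, then move to the next, rather than reopening everything at once.
   */
  public static <T> List<List<T>> partition(List<T> regions, int batchSize) {
    List<List<T>> batches = new ArrayList<>();
    for (int i = 0; i < regions.size(); i += batchSize) {
      // Copy each sublist so batches stay valid if the source list changes.
      batches.add(new ArrayList<>(regions.subList(i, Math.min(i + batchSize, regions.size()))));
    }
    return batches;
  }
}
```

The throttling knob then becomes two numbers (batch size and inter-batch pause), which is what makes the disruption tunable per workload.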
[jira] [Commented] (HBASE-28497) Missing fields in Get.toJSON
[ https://issues.apache.org/jira/browse/HBASE-28497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839862#comment-17839862 ] Hudson commented on HBASE-28497: Results for branch branch-3 [build #191 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/191/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/191/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/191/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-3/191/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Missing fields in Get.toJSON > > > Key: HBASE-28497 > URL: https://issues.apache.org/jira/browse/HBASE-28497 > Project: HBase > Issue Type: Improvement > Components: Client >Reporter: Chandra Sekhar K >Assignee: Chandra Sekhar K >Priority: Major > Labels: pull-request-available > Fix For: 2.6.0, 2.4.18, 3.0.0-beta-2, 2.5.9 > > > Missing fields in Get.toJSON conversion. 
> |Class|Whether Mapped to JSON?|add to json?| > |Get| | | > |row|Yes| | > |maxVersions|Yes| | > |cacheBlocks|Yes| | > |storeLimit|No|Yes| > |storeOffset|No|Yes| > |tr|Yes| | > |checkExistenceOnly|No|Yes| > |familyMap|Yes| | > | | | | > |Query| | | > |filter|Yes| | > |targetReplicaId|No|Yes| > |consistency|No|Yes| > |colFamTimeRangeMap|No|Yes| > |loadColumnFamiliesOnDemand|No|Yes| > | | | | > |OperationWithAttributes| | | > |attributes|partial, only ID attribute is set|Yes| > |priority|No|Yes| -- This message was sent by Atlassian Jira (v8.20.10#820010)
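Sketched abstractly, the fix amounts to writing the fields marked "Yes" in the "add to json?" column into the map that toJSON serializes. The helper below is illustrative only (field names mirror the table; the real code works on the Get/Query instance itself, not on loose parameters).

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class GetJsonSketch {
  /** Build the JSON map entries for the previously missing fields. */
  public static Map<String, Object> missingFieldsToJsonMap(int storeLimit, int storeOffset,
      boolean checkExistenceOnly, int targetReplicaId, String consistency) {
    Map<String, Object> json = new LinkedHashMap<>();
    json.put("storeLimit", storeLimit);
    json.put("storeOffset", storeOffset);
    json.put("checkExistenceOnly", checkExistenceOnly);
    json.put("targetReplicaId", targetReplicaId);
    json.put("consistency", consistency);
    return json;
  }
}
```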
[jira] [Commented] (HBASE-28497) Missing fields in Get.toJSON
[ https://issues.apache.org/jira/browse/HBASE-28497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839842#comment-17839842 ] Hudson commented on HBASE-28497: Results for branch branch-2.5 [build #515 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/515/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/515/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/515/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/515/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/515/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Missing fields in Get.toJSON > > > Key: HBASE-28497 > URL: https://issues.apache.org/jira/browse/HBASE-28497 > Project: HBase > Issue Type: Improvement > Components: Client > Reporter: Chandra Sekhar K > Assignee: Chandra Sekhar K > Priority: Major > Labels: pull-request-available > Fix For: 2.6.0, 2.4.18, 3.0.0-beta-2, 2.5.9 > > > Missing fields in Get.toJSON conversion.
[jira] [Commented] (HBASE-28150) CreateTableProcedure and DeleteTableProcedure should sleep a while before retrying
[ https://issues.apache.org/jira/browse/HBASE-28150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839843#comment-17839843 ] Hudson commented on HBASE-28150: Results for branch branch-2.5 [build #515 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/515/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/515/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/515/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/515/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/515/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > CreateTableProcedure and DeleteTableProcedure should sleep a while before > retrying > -- > > Key: HBASE-28150 > URL: https://issues.apache.org/jira/browse/HBASE-28150 > Project: HBase > Issue Type: Improvement > Components: master, proc-v2 > Affects Versions: 2.4.14 > Reporter: chaijunjie > Assignee: chaijunjie > Priority: Major > Labels: pull-request-available > Fix For: 2.6.0, 3.0.0-beta-2, 2.5.9 > > Attachments: HBASE-28150.patch
[jira] [Commented] (HBASE-28150) CreateTableProcedure and DeleteTableProcedure should sleep a while before retrying
[ https://issues.apache.org/jira/browse/HBASE-28150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839840#comment-17839840 ] Hudson commented on HBASE-28150: Results for branch master [build #1058 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1058/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1058/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1058/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1058/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > CreateTableProcedure and DeleteTableProcedure should sleep a while before > retrying > -- > > Key: HBASE-28150 > URL: https://issues.apache.org/jira/browse/HBASE-28150 > Project: HBase > Issue Type: Improvement > Components: master, proc-v2 > Affects Versions: 2.4.14 > Reporter: chaijunjie > Assignee: chaijunjie > Priority: Major > Labels: pull-request-available > Fix For: 2.6.0, 3.0.0-beta-2, 2.5.9 > > Attachments: HBASE-28150.patch
[jira] [Commented] (HBASE-28497) Missing fields in Get.toJSON
[ https://issues.apache.org/jira/browse/HBASE-28497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839839#comment-17839839 ] Hudson commented on HBASE-28497: Results for branch master [build #1058 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1058/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1058/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1058/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1058/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Missing fields in Get.toJSON > > > Key: HBASE-28497 > URL: https://issues.apache.org/jira/browse/HBASE-28497 > Project: HBase > Issue Type: Improvement > Components: Client >Reporter: Chandra Sekhar K >Assignee: Chandra Sekhar K >Priority: Major > Labels: pull-request-available > Fix For: 2.6.0, 2.4.18, 3.0.0-beta-2, 2.5.9 > > > Missing fields in Get.toJSON conversion. 
> |Class|Whether Mapped to JSON?|add to json?|
> |Get| | |
> |row|Yes| |
> |maxVersions|Yes| |
> |cacheBlocks|Yes| |
> |storeLimit|No|Yes|
> |storeOffset|No|Yes|
> |tr|Yes| |
> |checkExistenceOnly|No|Yes|
> |familyMap|Yes| |
> | | | |
> |Query| | |
> |filter|Yes| |
> |targetReplicaId|No|Yes|
> |consistency|No|Yes|
> |colFamTimeRangeMap|No|Yes|
> |loadColumnFamiliesOnDemand|No|Yes|
> | | | |
> |OperationWithAttributes| | |
> |attributes|partial, only ID attribute is set|Yes|
> |priority|No|Yes|
-- This message was sent by Atlassian Jira (v8.20.10#820010)
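To make the table concrete, here is a plain-Java sketch of what the JSON view of a Get would contain once the "add to json?" fields are included. A LinkedHashMap stands in for the serialized JSON object; field names mirror the table, but this is an illustration, not the actual HBase `Operation.toJSON` implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class GetJsonFields {
  // Illustrative only: a map standing in for the JSON view of a Get,
  // including the fields the table marks "add to json?".
  public static Map<String, Object> toJsonMap() {
    Map<String, Object> json = new LinkedHashMap<>();
    json.put("row", "r1");                  // already mapped
    json.put("maxVersions", 1);             // already mapped
    json.put("cacheBlocks", true);          // already mapped
    json.put("storeLimit", -1);             // proposed addition
    json.put("storeOffset", 0);             // proposed addition
    json.put("checkExistenceOnly", false);  // proposed addition
    json.put("targetReplicaId", -1);        // proposed addition
    json.put("consistency", "STRONG");      // proposed addition
    return json;
  }
}
```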
[jira] [Commented] (HBASE-28215) Region reopen procedure should support some sort of throttling
[ https://issues.apache.org/jira/browse/HBASE-28215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839838#comment-17839838 ] Hudson commented on HBASE-28215: Results for branch master [build #1058 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1058/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1058/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1058/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/1058/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Region reopen procedure should support some sort of throttling > -- > > Key: HBASE-28215 > URL: https://issues.apache.org/jira/browse/HBASE-28215 > Project: HBase > Issue Type: Improvement > Components: master, proc-v2 >Reporter: Ray Mattingly >Assignee: Ray Mattingly >Priority: Major > Fix For: 2.6.0, 3.0.0-beta-1 > > > The mass reopening of regions caused by a table descriptor modification can > be quite disruptive. For latency/error sensitive workloads, like our user > facing traffic, we need to be very careful about when we modify table > descriptors, and it can be virtually impossible to do it painlessly for busy > tables. > It would be nice if we supported configurable batching/throttling of > reopenings so that the amplitude of any disruption can be kept relatively > small. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] HBASE-28521 Use standard ConnectionRegistry and Client API to get reg… [hbase]
anmolnar commented on code in PR #5825: URL: https://github.com/apache/hbase/pull/5825#discussion_r1575308563

## hbase-server/src/main/java/org/apache/hadoop/hbase/replication/HBaseReplicationEndpoint.java:
@@ -224,36 +184,27 @@ public boolean isAborted() {
    * Get the list of all the region servers from the specified peer
    * @return list of region server addresses or an empty list if the slave is unavailable
    */
-  protected List<ServerName> fetchSlavesAddresses() {
-    List<String> children = null;
+  // will be overrided in tests so protected
+  protected Collection<ServerName> fetchPeerAddresses() {
     try {
-      synchronized (zkwLock) {
-        children = ZKUtil.listChildrenAndWatchForNewChildren(zkw, zkw.getZNodePaths().rsZNode);
-      }
-    } catch (KeeperException ke) {
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("Fetch slaves addresses failed", ke);
-      }
-      reconnect(ke);
-    }
-    if (children == null) {
+      return FutureUtils.get(conn.getAdmin().getRegionServers(true));
+    } catch (IOException e) {
+      LOG.debug("Fetch peer addresses failed", e);
       return Collections.emptyList();
     }
-    List<ServerName> addresses = new ArrayList<>(children.size());
-    for (String child : children) {
-      addresses.add(ServerName.parseServerName(child));
-    }
-    return addresses;
   }

   protected synchronized void chooseSinks() {

Review Comment:
   You have removed `PeerRegionServerListener` and I don't see anything replacing it. How will the replication endpoint notice if region servers are changed?

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
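The review question above is about losing the ZK watcher's change notifications. One common watcher-free alternative is to cache the fetched list and re-fetch it once it goes stale. The sketch below illustrates that pattern only; the class name, TTL policy, and String addresses are assumptions, not the design actually adopted in PR #5825:

```java
import java.util.Collections;
import java.util.List;
import java.util.function.Supplier;

public class PeerAddressCache {
  // Cache the peer's region server list and re-fetch it once stale,
  // instead of relying on a ZooKeeper watcher for change notification.
  private final Supplier<List<String>> fetcher; // e.g. wraps an Admin.getRegionServers() call
  private final long ttlMillis;
  private List<String> cached = Collections.emptyList();
  private long lastFetchMillis;
  private boolean initialized = false;

  public PeerAddressCache(Supplier<List<String>> fetcher, long ttlMillis) {
    this.fetcher = fetcher;
    this.ttlMillis = ttlMillis;
  }

  // Returns the cached list, refreshing it when older than the TTL.
  public synchronized List<String> get(long nowMillis) {
    if (!initialized || nowMillis - lastFetchMillis >= ttlMillis) {
      cached = fetcher.get();
      lastFetchMillis = nowMillis;
      initialized = true;
    }
    return cached;
  }
}
```

The trade-off versus a watcher is bounded staleness: a changed region server list is noticed within one TTL rather than immediately.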
Re: [PR] HBASE-28468: Integrate the data-tiering logic into cache evictions. [hbase]
Apache-HBase commented on PR #5829: URL: https://github.com/apache/hbase/pull/5829#issuecomment-2070706655 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 17s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ HBASE-28463 Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 47s | HBASE-28463 passed | | +1 :green_heart: | compile | 2m 40s | HBASE-28463 passed | | +1 :green_heart: | checkstyle | 0m 45s | HBASE-28463 passed | | +1 :green_heart: | spotless | 0m 47s | branch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 1m 36s | HBASE-28463 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 43s | the patch passed | | +1 :green_heart: | compile | 3m 32s | the patch passed | | +1 :green_heart: | javac | 3m 32s | the patch passed | | -0 :warning: | checkstyle | 0m 43s | hbase-server: The patch generated 1 new + 14 unchanged - 0 fixed = 15 total (was 14) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 6m 12s | Patch does not cause any errors with Hadoop 3.3.6. | | +1 :green_heart: | spotless | 0m 43s | patch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 1m 39s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 10s | The patch does not generate ASF License warnings. 
| | | | 33m 12s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/9/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5829 | | JIRA Issue | HBASE-28468 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile | | uname | Linux 6d01abe693f9 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-28463 / a2321ce9d8 | | Default Java | Eclipse Adoptium-11.0.17+8 | | checkstyle | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/9/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 82 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/9/console | | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28529 Use ZKClientConfig instead of system properties when sett… [hbase]
BukrosSzabolcs commented on PR #5835: URL: https://github.com/apache/hbase/pull/5835#issuecomment-2070648416 @Apache9 I like the flexibility of your solution, it would solve HBASE-28464. Thanks for taking the time and doing this!
Re: [PR] HBASE-28468: Integrate the data-tiering logic into cache evictions. [hbase]
Apache-HBase commented on PR #5829: URL: https://github.com/apache/hbase/pull/5829#issuecomment-2070581974 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 35s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-28463 Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 52s | HBASE-28463 passed | | +1 :green_heart: | compile | 1m 6s | HBASE-28463 passed | | +1 :green_heart: | shadedjars | 6m 54s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 31s | HBASE-28463 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 41s | the patch passed | | +1 :green_heart: | compile | 1m 0s | the patch passed | | +1 :green_heart: | javac | 1m 0s | the patch passed | | +1 :green_heart: | shadedjars | 6m 34s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 27s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 270m 9s | hbase-server in the patch failed. 
| | | | 299m 41s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/8/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5829 | | JIRA Issue | HBASE-28468 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux c0c985c7df5d 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-28463 / a2321ce9d8 | | Default Java | Eclipse Adoptium-11.0.17+8 | | unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/8/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/8/testReport/ | | Max. process+thread count | 4635 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/8/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28420 update the procedure's field to store for ServerRemoteProcedure [hbase]
Umeshkumar9414 commented on code in PR #5816: URL: https://github.com/apache/hbase/pull/5816#discussion_r1575047329

## hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ServerRemoteProcedure.java:
@@ -113,17 +117,20 @@ protected synchronized void completionCleanup(MasterProcedureEnv env) {
   @Override
   public synchronized void remoteCallFailed(MasterProcedureEnv env, ServerName serverName,
     IOException exception) {
+    state = MasterProcedureProtos.ServerRemoteProcedureState.SERVER_REMOTE_PROCEDURE_DISPATCH_FAIL;
     remoteOperationDone(env, exception);
   }

   @Override
   public synchronized void remoteOperationCompleted(MasterProcedureEnv env) {
+    state = MasterProcedureProtos.ServerRemoteProcedureState.SERVER_REMOTE_PROCEDURE_REPORT_SUCCEED;
     remoteOperationDone(env, null);
   }

   @Override
   public synchronized void remoteOperationFailed(MasterProcedureEnv env,
     RemoteProcedureException error) {
+    state = MasterProcedureProtos.ServerRemoteProcedureState.SERVER_REMOTE_PROCEDURE_SERVER_CRASH;

Review Comment:
   I went through the code and found that this is not the case (you are right, it is not a server crash). Let me change it in that scenario.
Re: [PR] HBASE-28468: Integrate the data-tiering logic into cache evictions. [hbase]
Apache-HBase commented on PR #5829: URL: https://github.com/apache/hbase/pull/5829#issuecomment-2070475898 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 38s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ HBASE-28463 Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 36s | HBASE-28463 passed | | +1 :green_heart: | compile | 0m 42s | HBASE-28463 passed | | +1 :green_heart: | shadedjars | 5m 10s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 26s | HBASE-28463 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 28s | the patch passed | | +1 :green_heart: | compile | 0m 42s | the patch passed | | +1 :green_heart: | javac | 0m 42s | the patch passed | | +1 :green_heart: | shadedjars | 5m 10s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 24s | the patch passed | ||| _ Other Tests _ | | -1 :x: | unit | 246m 56s | hbase-server in the patch failed. 
| | | | 269m 52s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/8/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5829 | | JIRA Issue | HBASE-28468 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 98ce70616485 5.4.0-174-generic #193-Ubuntu SMP Thu Mar 7 14:29:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-28463 / a2321ce9d8 | | Default Java | Temurin-1.8.0_352-b08 | | unit | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/8/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/8/testReport/ | | Max. process+thread count | 4411 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/8/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (HBASE-28544) org.apache.hadoop.hbase.rest.PerformanceEvaluation does not evaluate REST performance
[ https://issues.apache.org/jira/browse/HBASE-28544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth reassigned HBASE-28544: --- Assignee: Istvan Toth > org.apache.hadoop.hbase.rest.PerformanceEvaluation does not evaluate REST > performance > - > > Key: HBASE-28544 > URL: https://issues.apache.org/jira/browse/HBASE-28544 > Project: HBase > Issue Type: Bug > Components: REST >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > > org.apache.hadoop.hbase.rest.PerformanceEvaluation only uses the REST > interface for Admin tasks like creating tables. > All data access is done via the native RPC client, which makes the whole tool > a big red herring. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] HBASE-28464: Make replication ZKWatcher config customizable in extens… [hbase]
anmolnar commented on PR #5785: URL: https://github.com/apache/hbase/pull/5785#issuecomment-2070301813 Yep, just checked it, and that's exactly what @Apache9 's patch addresses. I suggest submitting #5825 and #5835 first, then rebasing this patch, and we'll be fine.
Re: [PR] HBASE-28464: Make replication ZKWatcher config customizable in extens… [hbase]
anmolnar commented on PR #5785: URL: https://github.com/apache/hbase/pull/5785#issuecomment-2070253681 > Mind taking a look at [HBASE-28521](https://issues.apache.org/jira/browse/HBASE-28521)? In general, we want to avoid leaking the internal zookeeper, so for replication, we want to use the standard connection registry API to connect to the peer cluster; for compatibility, we will use ZKConnectionRegistry by default. > > But here, after this change, we force the hbase replication endpoint to use zookeeper, which is not very good. I suppose we should find another way to customize the ZKWatcher creation. IIRC @anmolnar has done something related to this area, i.e., how to set zookeeper configurations in hbase configuration, with a special prefix? > > Thanks. The big difference between using `ZKClientConfig` and not using it is that passing the client config enables using different ZooKeeper client configurations in the same JVM process. This is especially useful in a replication scenario where you need to maintain two separate ZK connections, potentially with different TLS settings for instance. I believe this is the improvement that @BukrosSzabolcs implemented here. Using the Connection Registry is also a very good improvement, but at the same time we should focus on the above as well. My patch added the ability to set ZK system properties via `hbase-site.xml`, but that doesn't solve the problem of multiple ZK connections. I haven't looked into `ZKConnectionRegistry` yet, but we might want to introduce using a custom `ZKClientConfig` in there instead of here.
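The point about system properties versus `ZKClientConfig` can be demonstrated without ZooKeeper itself: a JVM system property is global to the process, so two clients configured that way always see the same value, while a per-client config object can differ per connection. In this sketch, `java.util.Properties` stands in for ZooKeeper's `ZKClientConfig`; `zookeeper.client.secure` is a real ZooKeeper client property name:

```java
import java.util.Properties;

public class PerClientZkConfigDemo {
  // Reading a JVM-global setting: every client in the process sees this value.
  public static String readGlobal() {
    return System.getProperty("zookeeper.client.secure", "false");
  }

  // A per-client config object, scoped to one connection rather than the JVM.
  public static Properties perClientConfig(String secure) {
    Properties config = new Properties();
    config.setProperty("zookeeper.client.secure", secure);
    return config;
  }
}
```

Two replication peers needing different TLS settings therefore cannot share the system-property mechanism, but each can carry its own config object.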
Re: [PR] HBASE-28464: Make replication ZKWatcher config customizable in extens… [hbase]
anmolnar commented on code in PR #5785: URL: https://github.com/apache/hbase/pull/5785#discussion_r1575062555 ## hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/RecoverableZooKeeper.java: ## @@ -110,6 +112,23 @@ public static RecoverableZooKeeper connect(Configuration conf, String ensemble, */ public static RecoverableZooKeeper connect(Configuration conf, String ensemble, Watcher watcher, final String identifier) throws IOException { +return connect(conf, ensemble, watcher, identifier, null); Review Comment: `ZKClientConfig` param is marked as Nullable in ZK code, so I think it's fine to pass null here. However you can also use the method overload _without_ client config parameter if you don't want to customize it. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-28521) Use standard ConnectionRegistry and Client API to get region server list in replication
[ https://issues.apache.org/jira/browse/HBASE-28521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839773#comment-17839773 ] Andor Molnar commented on HBASE-28521: -- I really welcome this change. +1 > Use standard ConnectionRegistry and Client API to get region server list in > replication > -- > > Key: HBASE-28521 > URL: https://issues.apache.org/jira/browse/HBASE-28521 > Project: HBase > Issue Type: Improvement > Components: Replication >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0-beta-2 > > > This is to allow specifying a cluster key without zookeeper in replication > peer config. > For now, we set a watcher on zookeeper for fetching the region server > list of the remote cluster; this means we must know the zookeeper address of > the remote cluster. This should be fixed, as we do not want to leak > zookeeper outside the cluster itself.
[jira] [Created] (HBASE-28544) org.apache.hadoop.hbase.rest.PerformanceEvaluation does not evaluate REST performance
Istvan Toth created HBASE-28544: --- Summary: org.apache.hadoop.hbase.rest.PerformanceEvaluation does not evaluate REST performance Key: HBASE-28544 URL: https://issues.apache.org/jira/browse/HBASE-28544 Project: HBase Issue Type: Bug Components: REST Reporter: Istvan Toth org.apache.hadoop.hbase.rest.PerformanceEvaluation only uses the REST interface for Admin tasks like creating tables. All data access is done via the native RPC client, which makes the whole tool a big red herring. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] HBASE-28436 Use connection url to specify the connection registry inf… [hbase]
Apache9 commented on PR #5770: URL: https://github.com/apache/hbase/pull/5770#issuecomment-2069590363 > Given there is more work to do here, might as well keep it to 2.7.0+ Got it. Thanks.
Re: [PR] HBASE-28436 Use connection url to specify the connection registry inf… [hbase]
bbeaudreault commented on PR #5770: URL: https://github.com/apache/hbase/pull/5770#issuecomment-2069584184 Given there is more work to do here, might as well keep it to 2.7.0+
Re: [PR] HBASE-28436 Use connection url to specify the connection registry inf… [hbase]
Apache9 commented on PR #5770: URL: https://github.com/apache/hbase/pull/5770#issuecomment-2069567572 Thanks @ndimiduk . Will merge soon if there are no other concerns. @bbeaudreault My plan is to apply this to branch-2+, i.e., 2.7.0+, since there is still other ongoing work to make this feature ready; in particular, we still need to add more documentation about this feature. Please let me know if you want this in 2.6.0 too.
Re: [PR] HBASE-28468: Integrate the data-tiering logic into cache evictions. [hbase]
Apache-HBase commented on PR #5829: URL: https://github.com/apache/hbase/pull/5829#issuecomment-2069564603 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 12s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ HBASE-28463 Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 51s | HBASE-28463 passed | | +1 :green_heart: | compile | 2m 31s | HBASE-28463 passed | | +1 :green_heart: | checkstyle | 0m 36s | HBASE-28463 passed | | +1 :green_heart: | spotless | 0m 42s | branch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 1m 32s | HBASE-28463 passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 45s | the patch passed | | +1 :green_heart: | compile | 2m 29s | the patch passed | | +1 :green_heart: | javac | 2m 29s | the patch passed | | -0 :warning: | checkstyle | 0m 37s | hbase-server: The patch generated 19 new + 14 unchanged - 0 fixed = 33 total (was 14) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 4m 57s | Patch does not cause any errors with Hadoop 3.3.6. | | -1 :x: | spotless | 0m 36s | patch has 65 errors when running spotless:check, run spotless:apply to fix. | | +1 :green_heart: | spotbugs | 1m 40s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 11s | The patch does not generate ASF License warnings. 
| | | | 27m 39s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/8/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5829 | | JIRA Issue | HBASE-28468 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile | | uname | Linux 9de76c7a970c 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | HBASE-28463 / a2321ce9d8 | | Default Java | Eclipse Adoptium-11.0.17+8 | | checkstyle | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/8/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | spotless | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/8/artifact/yetus-general-check/output/patch-spotless.txt | | Max. process+thread count | 81 (vs. ulimit of 3) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5829/8/console | | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28468: Integrate the data-tiering logic into cache evictions. [hbase]
jhungund commented on PR #5829: URL: https://github.com/apache/hbase/pull/5829#issuecomment-2069465706 > Please rebase your local branch with the current state of remote [HBASE-28463](https://issues.apache.org/jira/browse/HBASE-28463) then force push your changes to resolve the conflicts. Done!
Re: [PR] HBASE-28468: Integrate the data-tiering logic into cache evictions. [hbase]
jhungund commented on code in PR #5829: URL: https://github.com/apache/hbase/pull/5829#discussion_r1574757909 ## hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DataTieringManager.java: ## @@ -0,0 +1,265 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.regionserver; + +import java.util.ArrayList; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.OptionalLong; +import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.io.hfile.BlockCacheKey; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; +import org.apache.hadoop.hbase.util.Pair; +import org.apache.yetus.audience.InterfaceAudience; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * The DataTieringManager class categorizes data into hot data and cold data based on the specified + * {@link DataTieringType} when DataTiering is enabled. 
DataTiering is disabled by default with + * {@link DataTieringType} set to {@link DataTieringType#NONE}. The {@link DataTieringType} + * determines the logic for distinguishing data into hot or cold. By default, all data is considered + * as hot. + */ +@InterfaceAudience.Private +public class DataTieringManager { + private static final Logger LOG = LoggerFactory.getLogger(DataTieringManager.class); + public static final String DATATIERING_KEY = "hbase.hstore.datatiering.type"; + public static final String DATATIERING_HOT_DATA_AGE_KEY = +"hbase.hstore.datatiering.hot.age.millis"; + public static final DataTieringType DEFAULT_DATATIERING = DataTieringType.NONE; + public static final long DEFAULT_DATATIERING_HOT_DATA_AGE = 7 * 24 * 60 * 60 * 1000; // 7 Days + private static DataTieringManager instance; + private final Map<String, HRegion> onlineRegions; + + private DataTieringManager(Map<String, HRegion> onlineRegions) { +this.onlineRegions = onlineRegions; + } + + /** + * Initializes the DataTieringManager instance with the provided map of online regions. + * @param onlineRegions A map containing online regions. + */ + public static synchronized void instantiate(Map<String, HRegion> onlineRegions) { +if (instance == null) { + instance = new DataTieringManager(onlineRegions); + LOG.info("DataTieringManager instantiated successfully."); +} else { + LOG.warn("DataTieringManager is already instantiated."); +} + } + + /** + * Retrieves the instance of DataTieringManager. + * @return The instance of DataTieringManager. + * @throws IllegalStateException if DataTieringManager has not been instantiated. + */ + public static synchronized DataTieringManager getInstance() { +if (instance == null) { + throw new IllegalStateException( +"DataTieringManager has not been instantiated. Call instantiate() first."); +} +return instance; + } + + /** + * Determines whether data tiering is enabled for the given block cache key. 
+ * @param key the block cache key + * @return {@code true} if data tiering is enabled for the HFile associated with the key, + * {@code false} otherwise + * @throws DataTieringException if there is an error retrieving the HFile path or configuration + */ + public boolean isDataTieringEnabled(BlockCacheKey key) throws DataTieringException { +Path hFilePath = key.getFilePath(); +if (hFilePath == null) { + throw new DataTieringException("BlockCacheKey Doesn't Contain HFile Path"); +} +return isDataTieringEnabled(hFilePath); + } + + /** + * Determines whether data tiering is enabled for the given HFile path. + * @param hFilePath the path to the HFile + * @return {@code true} if data tiering is enabled, {@code false} otherwise + * @throws DataTieringException if there is an error retrieving the configuration + */ + public boolean isDataTieringEnabled(Path hFilePath) throws DataTieringException { +Configuration configuration = getConfiguration(hFilePath); +DataTieringType dataTieringType =
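The hot/cold decision discussed in this review thread comes down to an age check against hbase.hstore.datatiering.hot.age.millis (7 days by default, per DEFAULT_DATATIERING_HOT_DATA_AGE above). A minimal stand-alone sketch of that check, using plain longs in place of HBase's HFile and Configuration types; the class and method names here are illustrative, not HBase's API:

```java
// Sketch only, not the HBase implementation: a file is "hot" when the age of its
// newest cell timestamp is within the configured hot-age window, otherwise "cold".
class HotDataCheck {
    // Mirrors the 7-day default of hbase.hstore.datatiering.hot.age.millis quoted above.
    static final long DEFAULT_HOT_AGE_MILLIS = 7L * 24 * 60 * 60 * 1000;

    /** Returns true when (now - maxTimestamp) is within the hot-age window. */
    static boolean isHot(long maxTimestampMillis, long nowMillis, long hotAgeMillis) {
        return (nowMillis - maxTimestampMillis) <= hotAgeMillis;
    }
}
```

With this shape, eviction code can keep cold blocks as preferred eviction candidates while leaving hot blocks cached.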
[jira] [Comment Edited] (HBASE-17040) HBase Spark does not work in Kerberos and yarn-master mode
[ https://issues.apache.org/jira/browse/HBASE-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839654#comment-17839654 ] Nikita Pande edited comment on HBASE-17040 at 4/22/24 1:13 PM: --- Hi, I faced this issue when writing a Java HBase client app that does spark-submit. I figured out there are 2 ways to get it working: # On the command line, by passing --conf "spark.yarn.keytab=" --conf "spark.yarn.principal=" to the spark-submit command. # In the Java app: {code:java} // Set principal and keytab to the values needed for authentication for the given user SparkSession spark = SparkSession.builder().appName("SparkHBaseApp") .config("spark.yarn.principal", principal) .config("spark.yarn.keytab", keytab) .getOrCreate(); {code} NOTE: When I instead passed the --keytab and --principal flags on the spark-submit command line, it threw the error "Caused by: java.io.IOException: java.lang.RuntimeException: Found no valid authentication method from options" > HBase Spark does not work in Kerberos and yarn-master mode > -- > > Key: HBASE-17040 > URL: https://issues.apache.org/jira/browse/HBASE-17040 > Project: HBase > Issue Type: Bug > Components: spark >Affects Versions: 2.0.0 > Environment: HBase > Kerberos > Yarn > Cloudera >Reporter: Binzi Cao >Priority: Critical > > We are loading hbase records to RDD with the hbase-spark library in > Cloudera. > The hbase-spark code works if we submit the job with client mode, but does > not work in cluster mode. We got below exceptions: > ``` > 16/11/07 05:43:28 WARN security.UserGroupInformation: > PriviledgedActionException as:spark (auth:SIMPLE) > cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)] > 16/11/07 05:43:28 WARN ipc.RpcClientImpl: Exception encountered while > connecting to the server : javax.security.sasl.SaslException: GSS initiate > failed [Caused by GSSException: No valid credentials provided (Mechanism > level: Failed to find > any Kerberos tgt)] > 16/11/07 05:43:28 ERROR ipc.RpcClientImpl: SASL authentication failed. The > most likely cause is missing or invalid credentials. Consider 'kinit'. 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)] > at > com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) > at > org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:181) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1242) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331) > at >
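For reference, workaround 1 from the comment above assembled into a full spark-submit command line. This is illustrative only: the keytab path, principal, application class, and jar name are placeholders, not values from the original report:

```java
// Builds the spark-submit invocation that passes the Kerberos credentials via
// spark.yarn.keytab / spark.yarn.principal --conf options (workaround 1 above).
// All concrete values are hypothetical placeholders.
class SparkSubmitCmdSketch {
    static String command(String keytab, String principal) {
        return "spark-submit --master yarn --deploy-mode cluster"
            + " --conf spark.yarn.keytab=" + keytab
            + " --conf spark.yarn.principal=" + principal
            + " --class com.example.SparkHBaseApp app.jar";
    }
}
```

The same two properties can equally be set programmatically on the SparkSession builder, as the quoted Java snippet shows.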
[jira] [Commented] (HBASE-17040) HBase Spark does not work in Kerberos and yarn-master mode
[ https://issues.apache.org/jira/browse/HBASE-17040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839654#comment-17839654 ] Nikita Pande commented on HBASE-17040: -- Hi, I faced this issue when writing a Java HBase client app that does spark-submit. I figured out there are 2 ways to get it working: # On the command line, by passing --conf "spark.yarn.keytab=" --conf "spark.yarn.principal=" to the spark-submit command. # In the Java app: {code:java} // Set principal and keytab to the values needed for authentication for the given user SparkSession spark = SparkSession.builder().appName("SparkHBaseApp") .config("spark.yarn.principal", principal) .config("spark.yarn.keytab", keytab) .getOrCreate(); {code} NOTE: When I instead passed the --keytab and --principal flags on the spark-submit command line, it threw the error "Caused by: java.io.IOException: java.lang.RuntimeException: Found no valid authentication method from options" > HBase Spark does not work in Kerberos and yarn-master mode > -- > > Key: HBASE-17040 > URL: https://issues.apache.org/jira/browse/HBASE-17040 > Project: HBase > Issue Type: Bug > Components: spark >Affects Versions: 2.0.0 > Environment: HBase > Kerberos > Yarn > Cloudera >Reporter: Binzi Cao >Priority: Critical > > We are loading hbase records to RDD with the hbase-spark library in > Cloudera. > The hbase-spark code works if we submit the job with client mode, but does > not work in cluster mode. 
We got below exceptions: > ``` > 16/11/07 05:43:28 WARN security.UserGroupInformation: > PriviledgedActionException as:spark (auth:SIMPLE) > cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)] > 16/11/07 05:43:28 WARN ipc.RpcClientImpl: Exception encountered while > connecting to the server : javax.security.sasl.SaslException: GSS initiate > failed [Caused by GSSException: No valid credentials provided (Mechanism > level: Failed to find any Kerberos tgt)] > 16/11/07 05:43:28 ERROR ipc.RpcClientImpl: SASL authentication failed. The > most likely cause is missing or invalid credentials. Consider 'kinit'. > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)] > at > com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) > at > org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:181) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906) > at > org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873) > at > 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1242) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226) > at > org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:34118) > at > org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1627) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:92) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:89) > at > org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126) > at > org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:95) > at > org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:73) > at >
[jira] [Updated] (HBASE-28543) Multiple issues preventing starting org.apache.hadoop.hbase.rest.PerformanceEvaluation
[ https://issues.apache.org/jira/browse/HBASE-28543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated HBASE-28543: Summary: Multiple issues preventing starting org.apache.hadoop.hbase.rest.PerformanceEvaluation (was: org.apache.hadoop.hbase.rest.PerformanceEvaluation does not read hbase-site.xml) > Multiple issues preventing starting > org.apache.hadoop.hbase.rest.PerformanceEvaluation > -- > > Key: HBASE-28543 > URL: https://issues.apache.org/jira/browse/HBASE-28543 > Project: HBase > Issue Type: Bug > Components: REST >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > > I am trying to run org.apache.hadoop.hbase.rest.PerformanceEvaluation. > It cannot connect to the ZK quorum specified in hbase-site.xml. > It implements the Configurable interface incorrectly. > Fixing the Configurable implementation results in connecting to ZK properly.
[jira] [Created] (HBASE-28543) org.apache.hadoop.hbase.rest.PerformanceEvaluation does not read hbase-site.xml
Istvan Toth created HBASE-28543: --- Summary: org.apache.hadoop.hbase.rest.PerformanceEvaluation does not read hbase-site.xml Key: HBASE-28543 URL: https://issues.apache.org/jira/browse/HBASE-28543 Project: HBase Issue Type: Bug Components: REST Reporter: Istvan Toth Assignee: Istvan Toth I am trying to run org.apache.hadoop.hbase.rest.PerformanceEvaluation. It cannot connect to the ZK quorum specified in hbase-site.xml. It implements the Configurable interface incorrectly. Fixing the Configurable implementation results in connecting to ZK properly.
[jira] [Updated] (HBASE-28543) Multiple issues preventing starting org.apache.hadoop.hbase.rest.PerformanceEvaluation
[ https://issues.apache.org/jira/browse/HBASE-28543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated HBASE-28543: Description: I am trying to run org.apache.hadoop.hbase.rest.PerformanceEvaluation. It cannot connect to the ZK quorum specified in hbase-site.xml. It implements the Configurable interface incorrectly. Fixing the Configurable implementation results in connecting to ZK properly. --host option does not work because it conflicts with --h for help was: I am trying to run org.apache.hadoop.hbase.rest.PerformanceEvaluation. It cannot connect to the ZK quorum specified in hbase-site.xml. It implements the Configurable interface incorrectly. Fixing the Configurable implementation results in connecting to ZK properly. > Multiple issues preventing starting > org.apache.hadoop.hbase.rest.PerformanceEvaluation > -- > > Key: HBASE-28543 > URL: https://issues.apache.org/jira/browse/HBASE-28543 > Project: HBase > Issue Type: Bug > Components: REST >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > > I am trying to run org.apache.hadoop.hbase.rest.PerformanceEvaluation. > It cannot connect to the ZK quorum specified in hbase-site.xml. > It implements the Configurable interface incorrectly. > Fixing the Configurable implementation results in connecting to ZK properly. > --host option does not work because it conflicts with --h for help
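The "implements the Configurable interface incorrectly" symptom typically means setConf does not retain the injected configuration, so the tool never sees what was loaded from hbase-site.xml. A toy sketch of the contract, using a simplified stand-in interface rather than the real org.apache.hadoop.conf classes:

```java
import java.util.Map;

// Simplified stand-in for org.apache.hadoop.conf.Configurable: the driver injects a
// configuration via setConf, and the tool must store it and return it from getConf.
interface Configurable {
    void setConf(Map<String, String> conf);
    Map<String, String> getConf();
}

// Hypothetical corrected shape: keep the injected config instead of discarding it.
class PerfEvalSketch implements Configurable {
    private Map<String, String> conf;

    public void setConf(Map<String, String> conf) { this.conf = conf; }  // retain it
    public Map<String, String> getConf() { return conf; }                // hand it back
}
```

If getConf returns a freshly created configuration instead of the injected one, settings such as the ZooKeeper quorum silently fall back to defaults, which matches the symptom in the ticket.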
[jira] [Updated] (HBASE-24813) ReplicationSource should clear buffer usage on ReplicationSourceManager upon termination
[ https://issues.apache.org/jira/browse/HBASE-24813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HBASE-24813: --- Labels: pull-request-available (was: ) > ReplicationSource should clear buffer usage on ReplicationSourceManager upon > termination > > > Key: HBASE-24813 > URL: https://issues.apache.org/jira/browse/HBASE-24813 > Project: HBase > Issue Type: Bug > Components: Replication >Affects Versions: 3.0.0-alpha-1, 2.4.0, 2.2.6, 2.3.4, 2.5.0 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0-alpha-1, 2.2.7, 2.5.0, 2.4.1 > > Attachments: TestReplicationSyncUpTool.log, > image-2020-10-09-10-50-00-372.png > > > Following investigations on the issue described by [~elserj] on HBASE-24779, > we found out that once a peer is removed, thus killing the peer's related > *ReplicationSource* instance, it may leave > *ReplicationSourceManager.totalBufferUsed* inconsistent. This can happen if > *ReplicationSourceWALReader* had put some entries on its queue to be > processed by *ReplicationSourceShipper,* but the peer removal killed the > shipper before it could process the pending entries. When the > *ReplicationSourceWALReader* thread adds entries to the queue, it increments > *ReplicationSourceManager.totalBufferUsed* with the sum of the entries sizes. > When those entries are read by *ReplicationSourceShipper,* > *ReplicationSourceManager.totalBufferUsed* is then decreased. We should also > decrease *ReplicationSourceManager.totalBufferUsed* when *ReplicationSource* > is terminated, otherwise those unprocessed entries size would be consuming > *ReplicationSourceManager.totalBufferUsed* indefinitely, unless the RS gets > restarted. This may be a problem for deployments with multiple peers, or if > new peers are added.
Re: [PR] HBASE-24813 ReplicationSource should clear buffer usage on Replicatio… [hbase]
1458451310 commented on code in PR #2546: URL: https://github.com/apache/hbase/pull/2546#discussion_r1574592005 ## hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceShipper.java: ## @@ -325,4 +327,53 @@ void stopWorker() { public boolean isFinished() { return state == WorkerState.FINISHED; } + + /** + * Attempts to properly update ReplicationSourceManager.totalBufferUsed, + * in case there were unprocessed entries batched by the reader to the shipper, + * but the shipper didn't manage to ship those because the replication source is being terminated. + * In that case, it iterates through the batched entries and decreases the pending + * entries size from ReplicationSourceManager.totalBufferUsed + * + * NOTES + * 1) This method should only be called upon replication source termination. + * It blocks waiting for both shipper and reader threads termination, + * to make sure there are no race conditions + * when updating ReplicationSourceManager.totalBufferUsed. + * + * 2) It does not attempt to terminate reader and shipper threads. Those must + * have had interruption/termination triggered prior to calling this method. + */ + void clearWALEntryBatch() { +long timeout = System.currentTimeMillis() + this.shipEditsTimeout; +while (this.isAlive() || this.entryReader.isAlive()) { + try { +if (System.currentTimeMillis() >= timeout) { + LOG.warn("Interrupting source thread for peer {} without cleaning buffer usage " ++ "because clearWALEntryBatch method timed out whilst waiting reader/shipper " ++ "thread to stop.", this.source.getPeerId()); + Thread.currentThread().interrupt(); Review Comment: If we return here instead, we do not clean the batch, so the replication quota will be leaked.
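The accounting being reviewed above can be modeled in a few lines: the reader increments a shared counter for each batched entry, the shipper decrements it as entries ship, and source termination must release whatever is still queued or the quota "leaks" until the RegionServer restarts. This is an illustrative single-threaded model, not the HBase implementation:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.atomic.AtomicLong;

// Toy model of ReplicationSourceManager.totalBufferUsed bookkeeping (names are
// borrowed from the discussion above; the real code is multi-threaded).
class BufferQuotaModel {
    final AtomicLong totalBufferUsed = new AtomicLong();
    final Queue<Long> pendingEntrySizes = new ArrayDeque<>();

    /** Reader side: batch an entry and charge its size to the shared counter. */
    void readerEnqueue(long size) {
        pendingEntrySizes.add(size);
        totalBufferUsed.addAndGet(size);
    }

    /** Shipper side: ship one entry and release its size. */
    void shipperShipOne() {
        Long s = pendingEntrySizes.poll();
        if (s != null) {
            totalBufferUsed.addAndGet(-s);
        }
    }

    /** Termination: release accounting for entries that were batched but never shipped. */
    void clearWALEntryBatch() {
        Long s;
        while ((s = pendingEntrySizes.poll()) != null) {
            totalBufferUsed.addAndGet(-s);
        }
    }
}
```

Without the clearWALEntryBatch step, removing a peer mid-batch leaves totalBufferUsed permanently inflated, which is exactly the leak the reviewer is pointing at.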
Re: [PR] HBASE-28468: Integrate the data-tiering logic into cache evictions. [hbase]
jhungund commented on code in PR #5829: URL: https://github.com/apache/hbase/pull/5829#discussion_r1574577429 ## hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DataTieringManager.java: ## @@ -0,0 +1,265 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.regionserver; + +import java.util.ArrayList; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.OptionalLong; +import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.io.hfile.BlockCacheKey; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; +import org.apache.hadoop.hbase.util.Pair; +import org.apache.yetus.audience.InterfaceAudience; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * The DataTieringManager class categorizes data into hot data and cold data based on the specified + * {@link DataTieringType} when DataTiering is enabled. 
DataTiering is disabled by default with + * {@link DataTieringType} set to {@link DataTieringType#NONE}. The {@link DataTieringType} + * determines the logic for distinguishing data into hot or cold. By default, all data is considered + * as hot. + */ +@InterfaceAudience.Private +public class DataTieringManager { + private static final Logger LOG = LoggerFactory.getLogger(DataTieringManager.class); + public static final String DATATIERING_KEY = "hbase.hstore.datatiering.type"; + public static final String DATATIERING_HOT_DATA_AGE_KEY = +"hbase.hstore.datatiering.hot.age.millis"; + public static final DataTieringType DEFAULT_DATATIERING = DataTieringType.NONE; + public static final long DEFAULT_DATATIERING_HOT_DATA_AGE = 7 * 24 * 60 * 60 * 1000; // 7 Days + private static DataTieringManager instance; + private final Map<String, HRegion> onlineRegions; + + private DataTieringManager(Map<String, HRegion> onlineRegions) { +this.onlineRegions = onlineRegions; + } + + /** + * Initializes the DataTieringManager instance with the provided map of online regions. + * @param onlineRegions A map containing online regions. + */ + public static synchronized void instantiate(Map<String, HRegion> onlineRegions) { +if (instance == null) { + instance = new DataTieringManager(onlineRegions); + LOG.info("DataTieringManager instantiated successfully."); +} else { + LOG.warn("DataTieringManager is already instantiated."); +} + } + + /** + * Retrieves the instance of DataTieringManager. + * @return The instance of DataTieringManager. + * @throws IllegalStateException if DataTieringManager has not been instantiated. + */ + public static synchronized DataTieringManager getInstance() { +if (instance == null) { + throw new IllegalStateException( +"DataTieringManager has not been instantiated. Call instantiate() first."); +} +return instance; + } + + /** + * Determines whether data tiering is enabled for the given block cache key. 
+ * @param key the block cache key + * @return {@code true} if data tiering is enabled for the HFile associated with the key, + * {@code false} otherwise + * @throws DataTieringException if there is an error retrieving the HFile path or configuration + */ + public boolean isDataTieringEnabled(BlockCacheKey key) throws DataTieringException { +Path hFilePath = key.getFilePath(); +if (hFilePath == null) { + throw new DataTieringException("BlockCacheKey Doesn't Contain HFile Path"); +} +return isDataTieringEnabled(hFilePath); + } + + /** + * Determines whether data tiering is enabled for the given HFile path. + * @param hFilePath the path to the HFile + * @return {@code true} if data tiering is enabled, {@code false} otherwise + * @throws DataTieringException if there is an error retrieving the configuration + */ + public boolean isDataTieringEnabled(Path hFilePath) throws DataTieringException { +Configuration configuration = getConfiguration(hFilePath); +DataTieringType dataTieringType =
Re: [PR] HBASE-28436 Use connection url to specify the connection registry inf… [hbase]
ndimiduk commented on code in PR #5770: URL: https://github.com/apache/hbase/pull/5770#discussion_r1574542917 ## hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionRegistryFactory.java: ## @@ -17,27 +17,75 @@ */ package org.apache.hadoop.hbase.client; -import static org.apache.hadoop.hbase.HConstants.CLIENT_CONNECTION_REGISTRY_IMPL_CONF_KEY; - +import java.io.IOException; +import java.net.URI; +import java.util.ServiceLoader; +import org.apache.commons.lang3.StringUtils; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hbase.HConstants; import org.apache.hadoop.hbase.security.User; import org.apache.hadoop.hbase.util.ReflectionUtils; import org.apache.yetus.audience.InterfaceAudience; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hbase.thirdparty.com.google.common.collect.ImmutableMap; /** - * Factory class to get the instance of configured connection registry. + * The entry point for creating a {@link ConnectionRegistry}. */ @InterfaceAudience.Private final class ConnectionRegistryFactory { + private static final Logger LOG = LoggerFactory.getLogger(ConnectionRegistryFactory.class); + + private static final ImmutableMap<String, ConnectionRegistryCreator> CREATORS; + static { +ImmutableMap.Builder<String, ConnectionRegistryCreator> builder = ImmutableMap.builder(); +for (ConnectionRegistryCreator factory : ServiceLoader.load(ConnectionRegistryCreator.class)) { + builder.put(factory.protocol(), factory); +} +CREATORS = builder.build(); + } + private ConnectionRegistryFactory() { } - /** Returns The connection registry implementation to use. */ - static ConnectionRegistry getRegistry(Configuration conf, User user) { + /** + * Returns the connection registry implementation to use, for the given connection url + * {@code uri}. + * + * We use {@link ServiceLoader} to load different implementations, and use the scheme of the given + * {@code uri} to select. 
And if there is no protocol specified, or we can not find a + * {@link ConnectionRegistryCreator} implementation for the given scheme, we will fallback to use + * the old way to create the {@link ConnectionRegistry}. Notice that, if fallback happens, the + * specified connection url {@code uri} will not take effect, we will load all the related + * configurations from the given Configuration instance {@code conf} + */ + static ConnectionRegistry create(URI uri, Configuration conf, User user) throws IOException { +if (StringUtils.isBlank(uri.getScheme())) { + LOG.warn("No scheme specified for {}, fallback to use old way", uri); Review Comment: Understood.
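The scheme-keyed dispatch with configuration fallback that this javadoc describes can be sketched with a plain map standing in for the ServiceLoader/ImmutableMap machinery. The schemes and return values below are made up for illustration and are not HBase's actual registry names:

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy version of the dispatch: pick a registry factory by URI scheme, and fall back
// to the old configuration-driven path when the scheme is missing or unknown.
class RegistryDispatchSketch {
    static final Map<String, Function<URI, String>> CREATORS = new HashMap<>();
    static {
        // Hypothetical schemes; the real set comes from ServiceLoader discovery.
        CREATORS.put("zk", u -> "zk-registry:" + u.getHost());
        CREATORS.put("hbase+rpc", u -> "rpc-registry:" + u.getHost());
    }

    static String create(URI uri) {
        String scheme = uri.getScheme();
        if (scheme == null || scheme.isBlank() || !CREATORS.containsKey(scheme)) {
            return "fallback-registry"; // the "old way": ignore the URI, read the Configuration
        }
        return CREATORS.get(scheme).apply(uri);
    }
}
```

This mirrors the documented behaviour that a blank or unrecognized scheme silently ignores the connection URL and reads everything from the supplied Configuration.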
Re: [PR] HBASE-28468: Integrate the data-tiering logic into cache evictions. [hbase]
wchevreuil commented on PR #5829: URL: https://github.com/apache/hbase/pull/5829#issuecomment-2069053516 Please rebase your local branch with the current state of remote HBASE-28463 then force push your changes to resolve the conflicts.
Re: [PR] HBASE-28468: Integrate the data-tiering logic into cache evictions. [hbase]
wchevreuil commented on code in PR #5829: URL: https://github.com/apache/hbase/pull/5829#discussion_r1574522829 ## hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DataTieringManager.java: ## @@ -0,0 +1,265 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.regionserver; + +import java.util.ArrayList; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.OptionalLong; +import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.io.hfile.BlockCacheKey; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.util.EnvironmentEdgeManager; +import org.apache.hadoop.hbase.util.Pair; +import org.apache.yetus.audience.InterfaceAudience; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * The DataTieringManager class categorizes data into hot data and cold data based on the specified + * {@link DataTieringType} when DataTiering is enabled. 
DataTiering is disabled by default with + * {@link DataTieringType} set to {@link DataTieringType#NONE}. The {@link DataTieringType} + * determines the logic for distinguishing data into hot or cold. By default, all data is considered + * as hot. + */ +@InterfaceAudience.Private +public class DataTieringManager { + private static final Logger LOG = LoggerFactory.getLogger(DataTieringManager.class); + public static final String DATATIERING_KEY = "hbase.hstore.datatiering.type"; + public static final String DATATIERING_HOT_DATA_AGE_KEY = +"hbase.hstore.datatiering.hot.age.millis"; + public static final DataTieringType DEFAULT_DATATIERING = DataTieringType.NONE; + public static final long DEFAULT_DATATIERING_HOT_DATA_AGE = 7 * 24 * 60 * 60 * 1000; // 7 Days + private static DataTieringManager instance; + private final Map<String, HRegion> onlineRegions; + + private DataTieringManager(Map<String, HRegion> onlineRegions) { +this.onlineRegions = onlineRegions; + } + + /** + * Initializes the DataTieringManager instance with the provided map of online regions. + * @param onlineRegions A map containing online regions. + */ + public static synchronized void instantiate(Map<String, HRegion> onlineRegions) { +if (instance == null) { + instance = new DataTieringManager(onlineRegions); + LOG.info("DataTieringManager instantiated successfully."); +} else { + LOG.warn("DataTieringManager is already instantiated."); +} + } + + /** + * Retrieves the instance of DataTieringManager. + * @return The instance of DataTieringManager. + * @throws IllegalStateException if DataTieringManager has not been instantiated. + */ + public static synchronized DataTieringManager getInstance() { +if (instance == null) { + throw new IllegalStateException( +"DataTieringManager has not been instantiated. Call instantiate() first."); +} +return instance; + } + + /** + * Determines whether data tiering is enabled for the given block cache key. 
+ * @param key the block cache key + * @return {@code true} if data tiering is enabled for the HFile associated with the key, + * {@code false} otherwise + * @throws DataTieringException if there is an error retrieving the HFile path or configuration + */ + public boolean isDataTieringEnabled(BlockCacheKey key) throws DataTieringException { +Path hFilePath = key.getFilePath(); +if (hFilePath == null) { + throw new DataTieringException("BlockCacheKey Doesn't Contain HFile Path"); +} +return isDataTieringEnabled(hFilePath); + } + + /** + * Determines whether data tiering is enabled for the given HFile path. + * @param hFilePath the path to the HFile + * @return {@code true} if data tiering is enabled, {@code false} otherwise + * @throws DataTieringException if there is an error retrieving the configuration + */ + public boolean isDataTieringEnabled(Path hFilePath) throws DataTieringException { +Configuration configuration = getConfiguration(hFilePath); +DataTieringType dataTieringType =
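The hot/cold decision that `hbase.hstore.datatiering.hot.age.millis` controls comes down to an age comparison against the file's newest timestamp. A minimal standalone sketch of that check; the class and method names here are illustrative, not HBase's actual internals (only the 7-day default mirrors the code above):

```java
// Illustrative sketch of a time-based hot/cold check; HotDataCheck and
// isHot are hypothetical names, not part of the HBase codebase.
public class HotDataCheck {
    // Mirrors DEFAULT_DATATIERING_HOT_DATA_AGE from the review above: 7 days.
    public static final long DEFAULT_HOT_DATA_AGE_MILLIS = 7L * 24 * 60 * 60 * 1000;

    // A file is "hot" when its newest cell timestamp falls inside the hot-age window.
    public static boolean isHot(long maxTimestampMillis, long nowMillis, long hotAgeMillis) {
        return nowMillis - maxTimestampMillis <= hotAgeMillis;
    }

    public static void main(String[] args) {
        long now = 1_700_000_000_000L;
        // A file written one second ago is hot; one written 8 days ago is cold.
        System.out.println(isHot(now - 1_000L, now, DEFAULT_HOT_DATA_AGE_MILLIS));
        System.out.println(isHot(now - 8L * 24 * 60 * 60 * 1000, now, DEFAULT_HOT_DATA_AGE_MILLIS));
    }
}
```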
[jira] [Created] (HBASE-28542) Refactoring Data Tiering Management for Improved Extensibility and Maintainability
Vinayak Hegde created HBASE-28542: - Summary: Refactoring Data Tiering Management for Improved Extensibility and Maintainability Key: HBASE-28542 URL: https://issues.apache.org/jira/browse/HBASE-28542 Project: HBase Issue Type: Task Components: BucketCache Reporter: Vinayak Hegde Assignee: Vinayak Hegde This Jira focuses on refactoring the Data Tiering Management module to enhance modularity and remove the Singleton pattern. The objective is to restructure the codebase for better separation of concerns and increased flexibility. This includes migrating away from the Singleton pattern in favor of a more modular approach, enabling easier integration of new data tiering types and handlers. -- This message was sent by Atlassian Jira (v8.20.10#820010)
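The refactoring direction the ticket describes, away from the static instantiate()/getInstance() pair, can be pictured with a small dependency-injection sketch. All names below are hypothetical stand-ins for illustration, not the planned HBase API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class InjectionSketch {
    // Stand-in for a tiering manager: constructed explicitly, no global state.
    static class TieringManager {
        private final Map<String, String> onlineRegions;
        TieringManager(Map<String, String> onlineRegions) { this.onlineRegions = onlineRegions; }
        boolean knowsRegion(String name) { return onlineRegions.containsKey(name); }
    }

    // A consumer receives the manager through its constructor rather than
    // calling a static getInstance(), which eases testing and lets callers
    // swap in new tiering types or handlers.
    static class PrefetchTask {
        private final TieringManager tiering;
        PrefetchTask(TieringManager tiering) { this.tiering = tiering; }
        boolean shouldPrefetch(String region) { return tiering.knowsRegion(region); }
    }

    public static void main(String[] args) {
        Map<String, String> regions = new ConcurrentHashMap<>();
        regions.put("region-a", "online");
        PrefetchTask task = new PrefetchTask(new TieringManager(regions));
        System.out.println(task.shouldPrefetch("region-a"));
    }
}
```

The design point is that each component names its dependency in its constructor, so there is no hidden ordering requirement between instantiate() and getInstance() callers.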
[jira] [Resolved] (HBASE-28466) Integration of time-based priority logic of bucket cache in prefetch functionality of HBase.
[ https://issues.apache.org/jira/browse/HBASE-28466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil resolved HBASE-28466. -- Resolution: Fixed Merged into feature branch. Thanks for the contribution, [~vinayakhegde] ! > Integration of time-based priority logic of bucket cache in prefetch > functionality of HBase. > > > Key: HBASE-28466 > URL: https://issues.apache.org/jira/browse/HBASE-28466 > Project: HBase > Issue Type: Task > Components: BucketCache >Reporter: Janardhan Hungund >Assignee: Vinayak Hegde >Priority: Major > Labels: pull-request-available > > This Jira tracks the integration of the framework of APIs (implemented in > HBASE-28465) related to data tiering into prefetch logic of HBase. The > implementation should filter out the cold data and enable the prefetching of > hot data into bucket cache. > Thanks, > Janardhan > -- This message was sent by Atlassian Jira (v8.20.10#820010)
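The integration described, filtering out cold data so that only hot data is prefetched into the bucket cache, amounts to selecting candidate files by the age of their newest data. A hedged, self-contained sketch, assuming a map from file name to newest-cell timestamp (all names are illustrative, not HBase's prefetch API):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PrefetchFilter {
    // Hypothetical: keep only files whose newest timestamp falls within the
    // hot-data age window, in the spirit of the prefetch integration above.
    public static List<String> hotFiles(Map<String, Long> fileMaxTimestamps,
                                        long nowMillis, long hotAgeMillis) {
        return fileMaxTimestamps.entrySet().stream()
            .filter(e -> nowMillis - e.getValue() <= hotAgeMillis) // drop cold files
            .map(Map.Entry::getKey)
            .sorted() // deterministic order for the caller
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // "recent" is 1s old, "old" is far past the 100s window used here.
        System.out.println(hotFiles(Map.of("recent", 999_000L, "old", 1_000L),
            1_000_000L, 100_000L));
    }
}
```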
Re: [PR] HBASE-28466 Integration of time-based priority logic of bucket cache in prefetch functionality of HBase [hbase]
wchevreuil merged PR #5808: URL: https://github.com/apache/hbase/pull/5808 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28540 Cache Results in org.apache.hadoop.hbase.rest.client.Remo… [hbase]
Apache-HBase commented on PR #5846: URL: https://github.com/apache/hbase/pull/5846#issuecomment-2068891455 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 38s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 55s | master passed | | +1 :green_heart: | compile | 0m 20s | master passed | | +1 :green_heart: | shadedjars | 5m 18s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 15s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 55s | the patch passed | | +1 :green_heart: | compile | 0m 20s | the patch passed | | +1 :green_heart: | javac | 0m 20s | the patch passed | | +1 :green_heart: | shadedjars | 5m 15s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 15s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 3m 55s | hbase-rest in the patch passed. | | | | 23m 12s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/2/artifact/yetus-jdk17-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5846 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux f3671634a86f 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 5a404c4950 | | Default Java | Eclipse Adoptium-17.0.10+7 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/2/testReport/ | | Max. 
process+thread count | 1836 (vs. ulimit of 3) | | modules | C: hbase-rest U: hbase-rest | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/2/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
Re: [PR] HBASE-28540 Cache Results in org.apache.hadoop.hbase.rest.client.Remo… [hbase]
Apache-HBase commented on PR #5846: URL: https://github.com/apache/hbase/pull/5846#issuecomment-2068891637 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 37s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 4s | master passed | | +1 :green_heart: | compile | 0m 20s | master passed | | +1 :green_heart: | shadedjars | 5m 17s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 17s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 50s | the patch passed | | +1 :green_heart: | compile | 0m 20s | the patch passed | | +1 :green_heart: | javac | 0m 20s | the patch passed | | +1 :green_heart: | shadedjars | 5m 19s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 15s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 3m 55s | hbase-rest in the patch passed. | | | | 23m 18s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5846 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux cb0bf97ae4da 5.4.0-176-generic #196-Ubuntu SMP Fri Mar 22 16:46:39 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 5a404c4950 | | Default Java | Eclipse Adoptium-11.0.17+8 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/2/testReport/ | | Max. 
process+thread count | 1660 (vs. ulimit of 3) | | modules | C: hbase-rest U: hbase-rest | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/2/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
Re: [PR] HBASE-28540 Cache Results in org.apache.hadoop.hbase.rest.client.Remo… [hbase]
Apache-HBase commented on PR #5846: URL: https://github.com/apache/hbase/pull/5846#issuecomment-2068890749 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 25s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 24s | master passed | | +1 :green_heart: | compile | 0m 15s | master passed | | +1 :green_heart: | shadedjars | 5m 45s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 13s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 35s | the patch passed | | +1 :green_heart: | compile | 0m 15s | the patch passed | | +1 :green_heart: | javac | 0m 16s | the patch passed | | +1 :green_heart: | shadedjars | 5m 40s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 12s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 4m 8s | hbase-rest in the patch passed. | | | | 22m 54s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5846 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 7850ea941197 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 5a404c4950 | | Default Java | Temurin-1.8.0_352-b08 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/2/testReport/ | | Max. 
process+thread count | 1640 (vs. ulimit of 3) | | modules | C: hbase-rest U: hbase-rest | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/2/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
Re: [PR] HBASE-28540 Cache Results in org.apache.hadoop.hbase.rest.client.Remo… [hbase]
Apache-HBase commented on PR #5846: URL: https://github.com/apache/hbase/pull/5846#issuecomment-206922 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 43s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 53s | master passed | | +1 :green_heart: | compile | 0m 27s | master passed | | +1 :green_heart: | checkstyle | 0m 11s | master passed | | +1 :green_heart: | spotless | 0m 41s | branch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 0m 28s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 44s | the patch passed | | +1 :green_heart: | compile | 0m 28s | the patch passed | | -0 :warning: | javac | 0m 28s | hbase-rest generated 1 new + 162 unchanged - 1 fixed = 163 total (was 163) | | +1 :green_heart: | checkstyle | 0m 11s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 4m 53s | Patch does not cause any errors with Hadoop 3.3.6. | | +1 :green_heart: | spotless | 1m 1s | patch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 0m 49s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 10s | The patch does not generate ASF License warnings. 
| | | | 22m 0s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5846 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile | | uname | Linux bc282cdef8b7 5.4.0-174-generic #193-Ubuntu SMP Thu Mar 7 14:29:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 5a404c4950 | | Default Java | Eclipse Adoptium-11.0.17+8 | | javac | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/2/artifact/yetus-general-check/output/diff-compile-javac-hbase-rest.txt | | Max. process+thread count | 79 (vs. ulimit of 3) | | modules | C: hbase-rest U: hbase-rest | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/2/console | | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
Re: [PR] HBASE-28436 Use connection url to specify the connection registry inf… [hbase]
Apache9 commented on PR #5770: URL: https://github.com/apache/hbase/pull/5770#issuecomment-2068819984

> I was referring to the email thread on https://lists.apache.org/thread/ksw4tb8h22ojwmbn7pqwc7gox70vgzgr
>
> In my reading the conclusion was that we should not remove the ZK connection configuration path for the Connection object, only deprecate it in 3.0.
>
> This does not directly affect this ticket, I was only reflecting on your comment about making ZK connections internal only.

We are on the same page here. This PR still supports specifying the `hbase+zk` scheme, right? So we are not removing ZooKeeper support. We just want a way to specify a cluster without ZooKeeper; please see HBASE-28425.
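The scheme-based selection under discussion can be pictured with a toy dispatcher. The registry names below echo HBase's ZooKeeper- and RPC-based connection registries, but the function itself is an illustrative sketch, not the actual client API:

```java
import java.net.URI;

public class RegistrySelect {
    // Hypothetical mapping from connection-URI scheme to a registry type,
    // illustrating the idea in HBASE-28436; real HBase class wiring differs.
    public static String registryFor(String connectionUrl) {
        String scheme = URI.create(connectionUrl).getScheme();
        if (scheme == null || scheme.equals("hbase+zk")) {
            return "ZKConnectionRegistry";   // ZooKeeper-based, still supported
        } else if (scheme.equals("hbase+rpc")) {
            return "RpcConnectionRegistry";  // bootstrap via master/RS endpoints, no ZK
        }
        throw new IllegalArgumentException("Unknown connection scheme: " + scheme);
    }

    public static void main(String[] args) {
        System.out.println(registryFor("hbase+zk://zk1:2181/hbase"));
        System.out.println(registryFor("hbase+rpc://master1:16000"));
    }
}
```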
Re: [PR] HBASE-28540 Cache Results in org.apache.hadoop.hbase.rest.client.Remo… [hbase]
Apache-HBase commented on PR #5846: URL: https://github.com/apache/hbase/pull/5846#issuecomment-2068800075 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 40s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 1s | master passed | | +1 :green_heart: | compile | 0m 20s | master passed | | +1 :green_heart: | shadedjars | 5m 20s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 17s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 49s | the patch passed | | +1 :green_heart: | compile | 0m 19s | the patch passed | | +1 :green_heart: | javac | 0m 19s | the patch passed | | +1 :green_heart: | shadedjars | 5m 15s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 15s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 3m 58s | hbase-rest in the patch passed. | | | | 23m 22s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5846 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 8ed0e7027aee 5.4.0-176-generic #196-Ubuntu SMP Fri Mar 22 16:46:39 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 5a404c4950 | | Default Java | Eclipse Adoptium-11.0.17+8 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/1/testReport/ | | Max. 
process+thread count | 1636 (vs. ulimit of 3) | | modules | C: hbase-rest U: hbase-rest | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/1/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
Re: [PR] HBASE-28540 Cache Results in org.apache.hadoop.hbase.rest.client.Remo… [hbase]
Apache-HBase commented on PR #5846: URL: https://github.com/apache/hbase/pull/5846#issuecomment-2068799663 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 27s | Docker mode activated. | | -0 :warning: | yetus | 0m 2s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 43s | master passed | | +1 :green_heart: | compile | 0m 16s | master passed | | +1 :green_heart: | shadedjars | 5m 42s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 14s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 27s | the patch passed | | +1 :green_heart: | compile | 0m 15s | the patch passed | | +1 :green_heart: | javac | 0m 15s | the patch passed | | +1 :green_heart: | shadedjars | 5m 40s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 12s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 4m 10s | hbase-rest in the patch passed. | | | | 23m 5s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5846 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux 847c68040d02 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 5a404c4950 | | Default Java | Temurin-1.8.0_352-b08 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/1/testReport/ | | Max. 
process+thread count | 1635 (vs. ulimit of 3) | | modules | C: hbase-rest U: hbase-rest | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/1/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
Re: [PR] HBASE-28540 Cache Results in org.apache.hadoop.hbase.rest.client.Remo… [hbase]
Apache-HBase commented on PR #5846: URL: https://github.com/apache/hbase/pull/5846#issuecomment-2068800519 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 36s | Docker mode activated. | | -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 4s | master passed | | +1 :green_heart: | compile | 0m 21s | master passed | | +1 :green_heart: | shadedjars | 5m 21s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 17s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 54s | the patch passed | | +1 :green_heart: | compile | 0m 20s | the patch passed | | +1 :green_heart: | javac | 0m 20s | the patch passed | | +1 :green_heart: | shadedjars | 5m 22s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 0m 14s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 3m 52s | hbase-rest in the patch passed. | | | | 23m 30s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/1/artifact/yetus-jdk17-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5846 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux f4e244eb7c9d 5.4.0-172-generic #190-Ubuntu SMP Fri Feb 2 23:24:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 5a404c4950 | | Default Java | Eclipse Adoptium-17.0.10+7 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/1/testReport/ | | Max. 
process+thread count | 1937 (vs. ulimit of 3) | | modules | C: hbase-rest U: hbase-rest | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/1/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
Re: [PR] HBASE-28540 Cache Results in org.apache.hadoop.hbase.rest.client.Remo… [hbase]
Apache-HBase commented on PR #5846: URL: https://github.com/apache/hbase/pull/5846#issuecomment-2068797171 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 45s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 1s | master passed | | +1 :green_heart: | compile | 0m 30s | master passed | | +1 :green_heart: | checkstyle | 0m 12s | master passed | | +1 :green_heart: | spotless | 0m 44s | branch has no errors when running spotless:check. | | +1 :green_heart: | spotbugs | 0m 32s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 2m 43s | the patch passed | | +1 :green_heart: | compile | 0m 27s | the patch passed | | -0 :warning: | javac | 0m 27s | hbase-rest generated 1 new + 162 unchanged - 1 fixed = 163 total (was 163) | | +1 :green_heart: | checkstyle | 0m 10s | the patch passed | | -0 :warning: | whitespace | 0m 0s | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | hadoopcheck | 4m 56s | Patch does not cause any errors with Hadoop 3.3.6. | | -1 :x: | spotless | 0m 41s | patch has 35 errors when running spotless:check, run spotless:apply to fix. | | +1 :green_heart: | spotbugs | 0m 37s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 12s | The patch does not generate ASF License warnings. 
| | | | 21m 49s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/1/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5846 | | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti spotless checkstyle compile | | uname | Linux c52351a1475a 5.4.0-174-generic #193-Ubuntu SMP Thu Mar 7 14:29:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / 5a404c4950 | | Default Java | Eclipse Adoptium-11.0.17+8 | | javac | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/1/artifact/yetus-general-check/output/diff-compile-javac-hbase-rest.txt | | whitespace | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/1/artifact/yetus-general-check/output/whitespace-eol.txt | | spotless | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/1/artifact/yetus-general-check/output/patch-spotless.txt | | Max. process+thread count | 79 (vs. ulimit of 3) | | modules | C: hbase-rest U: hbase-rest | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5846/1/console | | versions | git=2.34.1 maven=3.8.6 spotbugs=4.7.3 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
Re: [PR] HBASE-28512 Update error prone to 2.26.1 [hbase]
Apache-HBase commented on PR #5844: URL: https://github.com/apache/hbase/pull/5844#issuecomment-2068775485 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 42s | Docker mode activated. | | -0 :warning: | yetus | 0m 5s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 24s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 2m 38s | branch-2 passed | | +1 :green_heart: | compile | 1m 35s | branch-2 passed | | +1 :green_heart: | shadedjars | 5m 12s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 2m 21s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 13s | the patch passed | | +1 :green_heart: | compile | 1m 31s | the patch passed | | +1 :green_heart: | javac | 1m 31s | the patch passed | | +1 :green_heart: | shadedjars | 5m 9s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 2m 16s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 399m 50s | root in the patch passed. 
| | | | 429m 55s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5844/2/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5844 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux ceb192424870 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 16baa8aa3d | | Default Java | Temurin-1.8.0_352-b08 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5844/2/testReport/ | | Max. process+thread count | 7952 (vs. ulimit of 3) | | modules | C: hbase-http hbase-server hbase-thrift . U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5844/2/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[jira] [Created] (HBASE-28541) RegionServer abort when max sequence id wrong
chaijunjie created HBASE-28541: -- Summary: RegionServer abort when max sequence id wrong Key: HBASE-28541 URL: https://issues.apache.org/jira/browse/HBASE-28541 Project: HBase Issue Type: Bug Affects Versions: 2.2.3 Reporter: chaijunjie

When we disable a table, some regions fail to close because the max sequence id is less than the old max sequence id, and then the RS aborts... The regions then close successfully on another RS. We only use bulkload... is it related?

2024-04-20 23:44:20,611 | INFO | RpcServer.default.FPBQ.Fifo.handler=483,queue=33,port=21302 | Bulk-load file hdfs://hacluster/hbase/staging/cdr__boss20240420__t20amapi48eb4ce0k0t80jatvnnhpm5rohaj26a5rdsp0s7kifokcdd32ko89u7n/info/4f43f6cf569643da8ecc140668fa726e is on different filesystem than the destination store. Copying file over to destination filesystem. | org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:540)
2024-04-20 23:44:20,645 | INFO | RpcServer.default.FPBQ.Fifo.handler=483,queue=33,port=21302 | Copied hdfs://hacluster/hbase/staging/cdr__boss20240420__t20amapi48eb4ce0k0t80jatvnnhpm5rohaj26a5rdsp0s7kifokcdd32ko89u7n/info/4f43f6cf569643da8ecc140668fa726e to temporary path on destination filesystem: viewfs://ClusterX/hbase/data/default/boss20240420/dc30be37cbd0ddee2f49712c55fd36cd/.tmp/6f7e97f2e3994881a6e04b36759f495e | org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:544)
2024-04-20 23:44:20,648 | INFO | RpcServer.default.FPBQ.Fifo.handler=483,queue=33,port=21302 | Loaded HFile viewfs://ClusterX/hbase/data/default/boss20240420/dc30be37cbd0ddee2f49712c55fd36cd/.tmp/6f7e97f2e3994881a6e04b36759f495e into dc30be37cbd0ddee2f49712c55fd36cd/info as viewfs://ClusterX/hbase/data/default/boss20240420/dc30be37cbd0ddee2f49712c55fd36cd/info/3946dc15ea1c49b582a2ed175fe12ec3_SeqId_26_ - updating store file list.
| org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:908) 2024-04-20 23:44:20,664 | INFO | RpcServer.default.FPBQ.Fifo.handler=483,queue=33,port=21302 | Loaded HFile viewfs://ClusterX/hbase/data/default/boss20240420/dc30be37cbd0ddee2f49712c55fd36cd/info/3946dc15ea1c49b582a2ed175fe12ec3_SeqId_26_ into dc30be37cbd0ddee2f49712c55fd36cd/info | org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:942) 2024-04-20 23:44:20,664 | INFO | RpcServer.default.FPBQ.Fifo.handler=483,queue=33,port=21302 | Successfully loaded viewfs://ClusterX/hbase/data/default/boss20240420/dc30be37cbd0ddee2f49712c55fd36cd/.tmp/6f7e97f2e3994881a6e04b36759f495e into dc30be37cbd0ddee2f49712c55fd36cd/info (new location: viewfs://ClusterX/hbase/data/default/boss20240420/dc30be37cbd0ddee2f49712c55fd36cd/info/3946dc15ea1c49b582a2ed175fe12ec3_SeqId_26_) | org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:914) 2024-04-20 23:44:23,592 | INFO | RpcServer.default.FPBQ.Fifo.handler=483,queue=33,port=21302 | Validating hfile at viewfs://ClusterX/tenant/hw_cdr/dsttab/load_20240420232000/boss20240420/info/a97bb2bfcb5244279bcedbe20ec8322b for inclusion in b3d102e51ca3b78a1078580ed8a002de/info | org.apache.hadoop.hbase.regionserver.HStore.assertBulkLoadHFileOk(HStore.java:817) 2024-04-20 23:44:23,669 | INFO | RpcServer.default.FPBQ.Fifo.handler=483,queue=33,port=21302 | Bulk-load file hdfs://hacluster/hbase/staging/cdr__boss20240420__t20amapi48eb4ce0k0t80jatvnnhpm5rohaj26a5rdsp0s7kifokcdd32ko89u7n/info/a97bb2bfcb5244279bcedbe20ec8322b is on different filesystem than the destination store. Copying file over to destination filesystem. 
| org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:540) 2024-04-20 23:44:23,743 | INFO | RpcServer.default.FPBQ.Fifo.handler=483,queue=33,port=21302 | Copied hdfs://hacluster/hbase/staging/cdr__boss20240420__t20amapi48eb4ce0k0t80jatvnnhpm5rohaj26a5rdsp0s7kifokcdd32ko89u7n/info/a97bb2bfcb5244279bcedbe20ec8322b to temporary path on destination filesystem: viewfs://ClusterX/hbase/data/default/boss20240420/b3d102e51ca3b78a1078580ed8a002de/.tmp/f40ec16d10ab4474ace7af41d9126aa0 | org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:544) 2024-04-20 23:44:23,747 | INFO | RpcServer.default.FPBQ.Fifo.handler=483,queue=33,port=21302 | Loaded HFile viewfs://ClusterX/hbase/data/default/boss20240420/b3d102e51ca3b78a1078580ed8a002de/.tmp/f40ec16d10ab4474ace7af41d9126aa0 into b3d102e51ca3b78a1078580ed8a002de/info as viewfs://ClusterX/hbase/data/default/boss20240420/b3d102e51ca3b78a1078580ed8a002de/info/7cfcf858a6b34bd1b9d47ccce5dde9e4_SeqId_41_ - updating store file list. | org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:908) 2024-04-20 23:44:23,760 | INFO |
[jira] [Updated] (HBASE-28540) Cache Results in org.apache.hadoop.hbase.rest.client.RemoteHTable.Scanner
[ https://issues.apache.org/jira/browse/HBASE-28540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HBASE-28540: --- Labels: pull-request-available (was: ) > Cache Results in org.apache.hadoop.hbase.rest.client.RemoteHTable.Scanner > - > > Key: HBASE-28540 > URL: https://issues.apache.org/jira/browse/HBASE-28540 > Project: HBase > Issue Type: Improvement > Components: REST >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Minor > Labels: pull-request-available > > The implementation of org.apache.hadoop.hbase.rest.client.RemoteHTable.Scanner > is very inefficient, as the standard next() method makes a separate HTTP > request for each row. > Performance can be improved by not specifying the row count in the REST call > and caching the returned Results. > Chunk size can still be influenced by scan.setBatch(); -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] HBASE-28540 Cache Results in org.apache.hadoop.hbase.rest.client.Remo… [hbase]
stoty opened a new pull request, #5846: URL: https://github.com/apache/hbase/pull/5846 …teHTable.Scanner
[jira] [Created] (HBASE-28540) Cache Results in org.apache.hadoop.hbase.rest.client.RemoteHTable.Scanner
Istvan Toth created HBASE-28540: --- Summary: Cache Results in org.apache.hadoop.hbase.rest.client.RemoteHTable.Scanner Key: HBASE-28540 URL: https://issues.apache.org/jira/browse/HBASE-28540 Project: HBase Issue Type: Improvement Components: REST Reporter: Istvan Toth Assignee: Istvan Toth The implementation of org.apache.hadoop.hbase.rest.client.RemoteHTable.Scanner is very inefficient, as the standard next() method makes a separate HTTP request for each row. Performance can be improved by not specifying the row count in the REST call and caching the returned Results. Chunk size can still be influenced by scan.setBatch();
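The caching idea in the ticket can be sketched with a toy scanner. This is an illustrative sketch only, not the actual RemoteHTable code: plain strings stand in for HBase Result objects, and a Supplier stands in for the REST call, so that one remote fetch covers many rows served from a local cache.

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;
import java.util.function.Supplier;

// Sketch of client-side result caching: next() drains a locally cached
// chunk and only goes back to the remote side when the cache is empty.
public class CachingScanner {
  private final Supplier<List<String>> fetchChunk; // one "HTTP request" per chunk
  private final Queue<String> cache = new ArrayDeque<>();
  private boolean exhausted = false;

  public CachingScanner(Supplier<List<String>> fetchChunk) {
    this.fetchChunk = fetchChunk;
  }

  /** Returns the next row, or null once the scan is exhausted. */
  public String next() {
    if (cache.isEmpty() && !exhausted) {
      List<String> chunk = fetchChunk.get();
      if (chunk.isEmpty()) {
        exhausted = true; // an empty chunk means the server has no more rows
      } else {
        cache.addAll(chunk);
      }
    }
    return cache.poll();
  }
}
```

With a chunk of N rows, N calls to next() cost one remote round trip instead of N, which is the gain the Jira describes.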
[jira] [Updated] (HBASE-28539) Merge of incremental backups fails if backups are on a separate FileSystem
[ https://issues.apache.org/jira/browse/HBASE-28539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benny Colyn updated HBASE-28539: Attachment: HBASE-28539_Fix_merging_of_incremental_backups_when_the_backup_filesystem_is_not_the_same_.patch > Merge of incremental backups fails if backups are on a separate FileSystem > -- > > Key: HBASE-28539 > URL: https://issues.apache.org/jira/browse/HBASE-28539 > Project: HBase > Issue Type: Bug >Affects Versions: 2.6.0, 4.0.0-alpha-1 >Reporter: Benny Colyn >Priority: Major > Attachments: > HBASE-28539_Fix_merging_of_incremental_backups_when_the_backup_filesystem_is_not_the_same_.patch > > > When the backups are stored on a location that is not the > DistributedFilesystem underpinning HBase itself merging of incremental > backups fails. Detected with backups stored on S3A, but can be reproduced > with any other (like LocalFilesystem). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28539) Merge of incremental backups fails if backups are on a separate FileSystem
[ https://issues.apache.org/jira/browse/HBASE-28539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benny Colyn updated HBASE-28539: Description: When the backups are stored on a location that is not the DistributedFilesystem underpinning HBase itself merging of incremental backups fails. Detected with backups stored on S3A, but can be reproduced with any other (like LocalFilesystem). Attached is a patch with a proposed fix and a unit test that reproduces the issue. was:When the backups are stored on a location that is not the DistributedFilesystem underpinning HBase itself merging of incremental backups fails. Detected with backups stored on S3A, but can be reproduced with any other (like LocalFilesystem). > Merge of incremental backups fails if backups are on a separate FileSystem > -- > > Key: HBASE-28539 > URL: https://issues.apache.org/jira/browse/HBASE-28539 > Project: HBase > Issue Type: Bug >Affects Versions: 2.6.0, 4.0.0-alpha-1 >Reporter: Benny Colyn >Priority: Major > Attachments: > HBASE-28539_Fix_merging_of_incremental_backups_when_the_backup_filesystem_is_not_the_same_.patch > > > When the backups are stored on a location that is not the > DistributedFilesystem underpinning HBase itself merging of incremental > backups fails. Detected with backups stored on S3A, but can be reproduced > with any other (like LocalFilesystem). > Attached is a patch with a proposed fix and a unit test that reproduces the > issue. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28539) Merge of incremental backups fails if backups are on a separate FileSystem
[ https://issues.apache.org/jira/browse/HBASE-28539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benny Colyn updated HBASE-28539: Attachment: (was: HBASE-28539_Fix_merging_of_incremental_backups_when_the_backup_filesystem_is_not_the_same_.patch) > Merge of incremental backups fails if backups are on a separate FileSystem > -- > > Key: HBASE-28539 > URL: https://issues.apache.org/jira/browse/HBASE-28539 > Project: HBase > Issue Type: Bug >Affects Versions: 2.6.0, 4.0.0-alpha-1 >Reporter: Benny Colyn >Priority: Major > Attachments: > HBASE-28539_Fix_merging_of_incremental_backups_when_the_backup_filesystem_is_not_the_same_-1.patch > > > When the backups are stored on a location that is not the > DistributedFilesystem underpinning HBase itself merging of incremental > backups fails. Detected with backups stored on S3A, but can be reproduced > with any other (like LocalFilesystem). > Attached is a patch with a proposed fix and a unit test that reproduces the > issue. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28539) Merge of incremental backups fails if backups are on a separate FileSystem
[ https://issues.apache.org/jira/browse/HBASE-28539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benny Colyn updated HBASE-28539: Attachment: HBASE-28539_Fix_merging_of_incremental_backups_when_the_backup_filesystem_is_not_the_same_-1.patch Status: Patch Available (was: Open) > Merge of incremental backups fails if backups are on a separate FileSystem > -- > > Key: HBASE-28539 > URL: https://issues.apache.org/jira/browse/HBASE-28539 > Project: HBase > Issue Type: Bug >Affects Versions: 2.6.0, 4.0.0-alpha-1 >Reporter: Benny Colyn >Priority: Major > Attachments: > HBASE-28539_Fix_merging_of_incremental_backups_when_the_backup_filesystem_is_not_the_same_-1.patch > > > When the backups are stored on a location that is not the > DistributedFilesystem underpinning HBase itself merging of incremental > backups fails. Detected with backups stored on S3A, but can be reproduced > with any other (like LocalFilesystem). > Attached is a patch with a proposed fix and a unit test that reproduces the > issue. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HBASE-28539) Merge of incremental backups fails if backups are on a separate FileSystem
[ https://issues.apache.org/jira/browse/HBASE-28539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benny Colyn updated HBASE-28539: Flags: Patch > Merge of incremental backups fails if backups are on a separate FileSystem > -- > > Key: HBASE-28539 > URL: https://issues.apache.org/jira/browse/HBASE-28539 > Project: HBase > Issue Type: Bug >Affects Versions: 2.6.0, 4.0.0-alpha-1 >Reporter: Benny Colyn >Priority: Major > Attachments: > HBASE-28539_Fix_merging_of_incremental_backups_when_the_backup_filesystem_is_not_the_same_.patch > > > When the backups are stored on a location that is not the > DistributedFilesystem underpinning HBase itself merging of incremental > backups fails. Detected with backups stored on S3A, but can be reproduced > with any other (like LocalFilesystem). > Attached is a patch with a proposed fix and a unit test that reproduces the > issue. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HBASE-28539) Merge of incremental backups fails if backups are on a separate FileSystem
Benny Colyn created HBASE-28539: --- Summary: Merge of incremental backups fails if backups are on a separate FileSystem Key: HBASE-28539 URL: https://issues.apache.org/jira/browse/HBASE-28539 Project: HBase Issue Type: Bug Affects Versions: 2.6.0, 4.0.0-alpha-1 Reporter: Benny Colyn When the backups are stored on a location that is not the DistributedFilesystem underpinning HBase itself merging of incremental backups fails. Detected with backups stored on S3A, but can be reproduced with any other (like LocalFilesystem).
Re: [PR] HBASE-28436 Use connection url to specify the connection registry inf… [hbase]
stoty commented on PR #5770: URL: https://github.com/apache/hbase/pull/5770#issuecomment-2068629529 I was referring to the email thread on https://lists.apache.org/thread/ksw4tb8h22ojwmbn7pqwc7gox70vgzgr In my reading the conclusion was that we should not remove the ZK connection configuration path for the Connection object, only deprecate it in 3.0. This does not directly affect this ticket, I was only reflecting on your comment about making ZK connections internal only.
Re: [PR] HBASE-28512 Update error prone to 2.26.1 [hbase]
Apache-HBase commented on PR #5844: URL: https://github.com/apache/hbase/pull/5844#issuecomment-2068575929 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 40s | Docker mode activated. | | -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck | ||| _ Prechecks _ | ||| _ branch-2 Compile Tests _ | | +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 3m 6s | branch-2 passed | | +1 :green_heart: | compile | 1m 55s | branch-2 passed | | +1 :green_heart: | shadedjars | 5m 47s | branch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 2m 54s | branch-2 passed | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 47s | the patch passed | | +1 :green_heart: | compile | 1m 55s | the patch passed | | +1 :green_heart: | javac | 1m 55s | the patch passed | | +1 :green_heart: | shadedjars | 5m 47s | patch has no errors when building our shaded downstream artifacts. | | +1 :green_heart: | javadoc | 2m 45s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 277m 10s | root in the patch passed. 
| | | | 312m 39s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5844/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/5844 | | Optional Tests | javac javadoc unit shadedjars compile | | uname | Linux ddbcdf32560d 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2 / 16baa8aa3d | | Default Java | Eclipse Adoptium-11.0.17+8 | | Test Results | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5844/2/testReport/ | | Max. process+thread count | 8516 (vs. ulimit of 3) | | modules | C: hbase-http hbase-server hbase-thrift . U: . | | Console output | https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-5844/2/console | | versions | git=2.34.1 maven=3.8.6 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HBASE-28420 update the procedure's field to store for ServerRemoteProcedure [hbase]
Umeshkumar9414 commented on code in PR #5816: URL: https://github.com/apache/hbase/pull/5816#discussion_r1574183543 ## hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ServerRemoteProcedure.java: ## @@ -137,6 +140,10 @@ synchronized void remoteOperationDone(MasterProcedureEnv env, Throwable error) { getProcId()); return; } +//below persistence is added so that if report goes to last active master, it throws exception +state = MasterProcedureProtos.ServerRemoteProcedureState.SERVER_REMOTE_PROCEDURE_REPORT_SUCCEED; + env.getMasterServices().getMasterProcedureExecutor().getStore().update(this); + complete(env, error); Review Comment: I was also thinking of the scenarios where, while releasing the new bits, there might be two kinds of HMaster: one with new bits and one with old bits. What happens if the active master with new bits crashes/stops and a master with old bits becomes active, and vice versa? Do we have release guidelines that I should keep in mind while thinking through such scenarios?
[jira] [Work started] (HBASE-28535) Implement a region server level configuration to enable/disable data-tiering
[ https://issues.apache.org/jira/browse/HBASE-28535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-28535 started by Janardhan Hungund. - > Implement a region server level configuration to enable/disable data-tiering > > > Key: HBASE-28535 > URL: https://issues.apache.org/jira/browse/HBASE-28535 > Project: HBase > Issue Type: Task > Components: BucketCache >Reporter: Janardhan Hungund >Assignee: Janardhan Hungund >Priority: Major > > Provide the user with the ability to enable and disable the data tiering > feature. The time-based data tiering is applicable to a specific set of use > cases which write date based records and access to recently written data. > The feature, in general, should be avoided for use cases which are not > dependent on the date-based reads and writes as the code flows which enable > data temperature checks can induce performance regressions. > This Jira is added to track the functionality to optionally enable > region-server wide configuration to disable or enable the feature. > Thanks, > Janardhan -- This message was sent by Atlassian Jira (v8.20.10#820010)
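As an illustration only, a region-server-wide switch like the one this Jira proposes would be an hbase-site.xml entry along these lines. The property name below is hypothetical; the real key is whatever the eventual HBASE-28535 patch defines.

```xml
<!-- Hypothetical property name; check the HBASE-28535 patch for the real key. -->
<property>
  <name>hbase.regionserver.datatiering.enable</name>
  <value>false</value>
  <description>Region-server-wide switch for time-based data tiering.
  Disabled here to avoid data-temperature checks on workloads that do not
  depend on date-based reads and writes.</description>
</property>
```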
[jira] [Assigned] (HBASE-28535) Implement a region server level configuration to enable/disable data-tiering
[ https://issues.apache.org/jira/browse/HBASE-28535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janardhan Hungund reassigned HBASE-28535: - Assignee: Janardhan Hungund > Implement a region server level configuration to enable/disable data-tiering > > > Key: HBASE-28535 > URL: https://issues.apache.org/jira/browse/HBASE-28535 > Project: HBase > Issue Type: Task > Components: BucketCache >Reporter: Janardhan Hungund >Assignee: Janardhan Hungund >Priority: Major > > Provide the user with the ability to enable and disable the data tiering > feature. The time-based data tiering is applicable to a specific set of use > cases which write date based records and access to recently written data. > The feature, in general, should be avoided for use cases which are not > dependent on the date-based reads and writes as the code flows which enable > data temperature checks can induce performance regressions. > This Jira is added to track the functionality to optionally enable > region-server wide configuration to disable or enable the feature. > Thanks, > Janardhan -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HBASE-28497) Missing fields in Get.toJSON
[ https://issues.apache.org/jira/browse/HBASE-28497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17839490#comment-17839490 ] Hudson commented on HBASE-28497: Results for branch branch-2.4 [build #723 on builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/723/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/723/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/723/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/723/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/723/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Missing fields in Get.toJSON > > > Key: HBASE-28497 > URL: https://issues.apache.org/jira/browse/HBASE-28497 > Project: HBase > Issue Type: Improvement > Components: Client >Reporter: Chandra Sekhar K >Assignee: Chandra Sekhar K >Priority: Major > Labels: pull-request-available > Fix For: 2.6.0, 2.4.18, 3.0.0-beta-2, 2.5.9 > > > Missing fields in Get.toJSON conversion. 
> |Class|Whether Mapped to JSON?|add to json?| > |Get| | | > |row|Yes| | > |maxVersions|Yes| | > |cacheBlocks|Yes| | > |storeLimit|No|Yes| > |storeOffset|No|Yes| > |tr|Yes| | > |checkExistenceOnly|No|Yes| > |familyMap|Yes| | > | | | | > |Query| | | > |filter|Yes| | > |targetReplicaId|No|Yes| > |consistency|No|Yes| > |colFamTimeRangeMap|No|Yes| > |loadColumnFamiliesOnDemand|No|Yes| > | | | | > |OperationWithAttributes| | | > |attributes|partial, only ID attribute is set|Yes| > |priority|No|Yes| -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] HBASE-28420 update the procedure's field to store for ServerRemoteProcedure [hbase]
Umeshkumar9414 commented on code in PR #5816: URL: https://github.com/apache/hbase/pull/5816#discussion_r1574170676 ## hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ServerRemoteProcedure.java: ## @@ -137,6 +140,10 @@ synchronized void remoteOperationDone(MasterProcedureEnv env, Throwable error) { getProcId()); return; } +//below persistence is added so that if report goes to last active master, it throws exception +state = MasterProcedureProtos.ServerRemoteProcedureState.SERVER_REMOTE_PROCEDURE_REPORT_SUCCEED; + env.getMasterServices().getMasterProcedureExecutor().getStore().update(this); + complete(env, error); Review Comment: First I want to understand your reasoning for not calling complete directly again. Is it a design choice that state transitions should happen in execute only? > Think of this scenario, after you persist the state, master crashes before you call complete, and after restart. If we are fine with calling complete in remoteOperationDone, would it be okay to call complete before persisting the state?
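The ordering question in this thread (persist the new state, then complete) can be sketched with a toy procedure class. The names below are invented for illustration and are not HBase's real classes; the point is that persisting REPORT_SUCCEED before completing leaves a crash-recoverable record, so a duplicate report delivered to a restarted master can be rejected instead of re-driving the procedure.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a remote procedure's report handling: persist-before-complete.
public class RemoteProcedureSketch {
  enum State { DISPATCHED, REPORT_SUCCEED, COMPLETED }

  final List<State> store = new ArrayList<>(); // stands in for the procedure store/WAL
  State state = State.DISPATCHED;

  void persist() { store.add(state); }

  // Returns false for a duplicate report, e.g. one re-delivered after a
  // master restart once the state was already persisted.
  boolean remoteOperationDone() {
    if (state != State.DISPATCHED) {
      return false; // already reported; reject the duplicate
    }
    state = State.REPORT_SUCCEED;
    persist();               // a crash after this point is recoverable:
    state = State.COMPLETED; // the persisted REPORT_SUCCEED survives restart
    return true;
  }
}
```

If the persist happened after complete instead, a crash in between would lose the fact that the report already arrived, which is the scenario the reviewer is probing.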
Re: [PR] HBASE-28436 Use connection url to specify the connection registry inf… [hbase]
Apache9 commented on PR #5770: URL: https://github.com/apache/hbase/pull/5770#issuecomment-2068557401 Ah, maybe I misguided you... I do not mean we want to completely remove zookeeper in the 3.0.0 release; we just want to provide a way to hide zookeeper inside HBase, besides the zookeeper-based ways. For now, there is still one place where we must expose zookeeper: when configuring a replication peer. We should provide a way to specify a remote cluster without zookeeper, but you are still free to use zookeeper there. Thanks.