[GitHub] [hbase] Apache-HBase commented on pull request #3430: HBASE-26029 It is not reliable to use nodeDeleted event to track regi…
Apache-HBase commented on pull request #3430: URL: https://github.com/apache/hbase/pull/3430#issuecomment-869380028

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:---:|---:|:---|:---|
| +0 :ok: | reexec | 8m 57s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +0 :ok: | prototool | 0m 1s | prototool was not available. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 54s | master passed |
| +1 :green_heart: | compile | 5m 50s | master passed |
| +1 :green_heart: | checkstyle | 1m 45s | master passed |
| +1 :green_heart: | spotbugs | 7m 17s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 10s | the patch passed |
| +1 :green_heart: | compile | 5m 25s | the patch passed |
| +1 :green_heart: | cc | 5m 25s | the patch passed |
| +1 :green_heart: | javac | 5m 25s | the patch passed |
| +1 :green_heart: | checkstyle | 1m 35s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 20m 16s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. |
| +1 :green_heart: | hbaseprotoc | 2m 0s | the patch passed |
| +1 :green_heart: | spotbugs | 7m 8s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. |
| | | | 79m 31s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3430/3/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3430 |
| Optional Tests | dupname asflicense cc hbaseprotoc prototool javac spotbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux 67ea78811a38 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 22ec681ad9 |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Max. process+thread count | 86 (vs. ulimit of 3) |
| modules | C: hbase-protocol-shaded hbase-replication hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3430/3/console |
| versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-26021) HBase 1.7 to 2.4 upgrade issue due to incompatible deserialization
[ https://issues.apache.org/jira/browse/HBASE-26021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17370424#comment-17370424 ] Bharath Vissapragada commented on HBASE-26021: -- Sorry, late to the party, I was OOO this week and couldn't access jira, catching up on the latest discussion. Agree that 1.7.0 is broken and we should spin up 1.7.0.1 ([~reidchan] I can also help create the release if you are busy, let me know; there is another critical fix, HBASE-25984, that I recently committed to branch-1 that is worthy of inclusion). Coming to the fix, isn't the patch incomplete? If we just serialize HTD instead of TD (=HTD + table state), there is loss of information, right? We are just not seeing it in tests because we cache the table state in TableStateManager, so something like disable table, stop hbase, start hbase should result in an enabled table? Let's properly revert this patch instead? (We may need to add some special code to handle serialized TDs for those who are already on 1.7.0?) > HBase 1.7 to 2.4 upgrade issue due to incompatible deserialization > -- > > Key: HBASE-26021 > URL: https://issues.apache.org/jira/browse/HBASE-26021 > Project: HBase > Issue Type: Bug >Affects Versions: 1.7.0, 2.4.4 >Reporter: Viraj Jasani >Priority: Major > Attachments: Screenshot 2021-06-22 at 12.54.21 PM.png, Screenshot > 2021-06-22 at 12.54.30 PM.png > > > As of today, if we bring up an HBase cluster using branch-1 and upgrade to > branch-2.4, we are facing an issue while parsing the namespace from HDFS fileinfo. 
> Instead of "*hbase:meta*" and "*hbase:namespace*", parsing using ProtobufUtil > seems to be producing "*\n hbase meta*" and "*\n hbase namespace*" > {code:java} > 2021-06-22 00:05:56,611 INFO > [RpcServer.priority.RWQ.Fifo.read.handler=3,queue=1,port=16025] > regionserver.RSRpcServices: Open hbase:meta,,1.1588230740 > 2021-06-22 00:05:56,648 INFO > [RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16025] > regionserver.RSRpcServices: Open > hbase:namespace,,1624297762817.396cb6cc00cd4334cb1ea3a792d7529a. > 2021-06-22 00:05:56,759 ERROR > [RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16025] > ipc.RpcServer: Unexpected throwable object > java.lang.IllegalArgumentException: Illegal character < > > at 0. Namespaces may only contain 'alphanumeric characters' from any > > language and digits: > ^Ehbase^R namespace > at > org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:246) > at > org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:220) > at org.apache.hadoop.hbase.TableName.(TableName.java:348) > at > org.apache.hadoop.hbase.TableName.createTableNameIfNecessary(TableName.java:385) > at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:508) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableName(ProtobufUtil.java:2292) > at > org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toTableDescriptor(ProtobufUtil.java:2937) > at > org.apache.hadoop.hbase.client.TableDescriptorBuilder$ModifyableTableDescriptor.parseFrom(TableDescriptorBuilder.java:1625) > at > org.apache.hadoop.hbase.client.TableDescriptorBuilder$ModifyableTableDescriptor.access$200(TableDescriptorBuilder.java:597) > at > org.apache.hadoop.hbase.client.TableDescriptorBuilder.parseFrom(TableDescriptorBuilder.java:320) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.readTableDescriptor(FSTableDescriptors.java:511) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:496) > 
at > org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorFromFs(FSTableDescriptors.java:482) > at > org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:210) > at > org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:2112) > at > org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:35218) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:395) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338) > at > org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) > 2021-06-22 00:05:56,759 ERROR > [RpcServer.priority.RWQ.Fifo.read.handler=3,queue=1,port=16025] > ipc.RpcServer: Unexpected throwable object > java.lang.IllegalArgumentException: Illegal character < > > at 0. Namespaces may only contain 'alphanumeric characters' from any > > language and digits: > ^Ehbase^R^Dmeta >
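The control characters in the log above (`^Ehbase^R^Dmeta`, and the illegal character at offset 0) are what protobuf wire framing looks like when the raw bytes are read back as text: the tag byte for a length-delimited field 1 is 0x0A, which is also ASCII newline, and the length and tag bytes that follow surface as `^E`, `^R`, `^D`. The sketch below is an illustration only — it is not HBase's serialization code, and the two-field namespace/qualifier message shape is an assumption for the example — but it shows why deserializing bytes written for a different message type yields strings that start with framing garbage.

```java
// Illustration: hand-assembled protobuf wire bytes for a hypothetical message
//   message TableNameLike { bytes namespace = 1; bytes qualifier = 2; }
// read back naively as a string. Not HBase code; for explanation only.
public class WireBytesSketch {
    static byte[] encodeTableName(String ns, String qualifier) {
        byte[] n = ns.getBytes(java.nio.charset.StandardCharsets.UTF_8);
        byte[] q = qualifier.getBytes(java.nio.charset.StandardCharsets.UTF_8);
        byte[] out = new byte[2 + n.length + 2 + q.length];
        int i = 0;
        out[i++] = 0x0A;             // tag: field 1, wire type 2 (length-delimited) == ASCII '\n'
        out[i++] = (byte) n.length;  // varint length prefix (assumes length < 128)
        for (byte b : n) out[i++] = b;
        out[i++] = 0x12;             // tag: field 2, wire type 2 == ASCII DC2, printed as ^R
        out[i++] = (byte) q.length;
        for (byte b : q) out[i++] = b;
        return out;
    }

    public static void main(String[] args) {
        byte[] wire = encodeTableName("hbase", "meta");
        String asString = new String(wire, java.nio.charset.StandardCharsets.ISO_8859_1);
        // The framing bytes leak into the "string": first character is a newline,
        // which is exactly the kind of character TableName.isLegalNamespaceName rejects.
        System.out.println(asString.charAt(0) == '\n'); // true
    }
}
```

This is why the reported fix discussion centers on *which* message type was serialized (HTD vs. TD): the payload bytes are fine, but a reader expecting a different schema interprets framing bytes as content.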
[GitHub] [hbase] Apache-HBase commented on pull request #3425: HBASE-25991 Do compaction on compaction server
Apache-HBase commented on pull request #3425: URL: https://github.com/apache/hbase/pull/3425#issuecomment-869369622

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:---:|---:|:---|:---|
| +0 :ok: | reexec | 1m 7s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ HBASE-25714 Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 14s | HBASE-25714 passed |
| +1 :green_heart: | compile | 3m 25s | HBASE-25714 passed |
| +1 :green_heart: | checkstyle | 1m 14s | HBASE-25714 passed |
| +1 :green_heart: | spotbugs | 2m 13s | HBASE-25714 passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 1s | the patch passed |
| +1 :green_heart: | compile | 3m 20s | the patch passed |
| +1 :green_heart: | javac | 3m 20s | the patch passed |
| +1 :green_heart: | checkstyle | 1m 12s | hbase-server: The patch generated 0 new + 91 unchanged - 9 fixed = 91 total (was 100) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 19m 52s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1 3.3.0. |
| +1 :green_heart: | spotbugs | 2m 26s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 13s | The patch does not generate ASF License warnings. |
| | | | 51m 28s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3425/4/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3425 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux e2094d56225e 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | HBASE-25714 / da0fa3000e |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Max. process+thread count | 86 (vs. ulimit of 3) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3425/4/console |
| versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3419: HBASE-26027 The calling of HTable.batch blocked at AsyncRequestFuture…
Apache-HBase commented on pull request #3419: URL: https://github.com/apache/hbase/pull/3419#issuecomment-869363561

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:---:|---:|:---|:---|
| +0 :ok: | reexec | 6m 25s | Docker mode activated. |
| -0 :warning: | yetus | 0m 7s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2 Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 46s | branch-2 passed |
| +1 :green_heart: | compile | 0m 31s | branch-2 passed |
| +1 :green_heart: | shadedjars | 7m 36s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 31s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 30s | the patch passed |
| +1 :green_heart: | compile | 0m 33s | the patch passed |
| +1 :green_heart: | javac | 0m 33s | the patch passed |
| +1 :green_heart: | shadedjars | 7m 36s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 28s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 54s | hbase-client in the patch passed. |
| | | | 37m 6s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3419/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3419 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux d1b9d2a1b3db 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 166becfd66 |
| Default Java | AdoptOpenJDK-11.0.10+9 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3419/3/testReport/ |
| Max. process+thread count | 212 (vs. ulimit of 12500) |
| modules | C: hbase-client U: hbase-client |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3419/3/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3419: HBASE-26027 The calling of HTable.batch blocked at AsyncRequestFuture…
Apache-HBase commented on pull request #3419: URL: https://github.com/apache/hbase/pull/3419#issuecomment-869361632

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:---:|---:|:---|:---|
| +0 :ok: | reexec | 0m 40s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ branch-2 Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 54s | branch-2 passed |
| +1 :green_heart: | compile | 1m 5s | branch-2 passed |
| +1 :green_heart: | checkstyle | 0m 35s | branch-2 passed |
| +1 :green_heart: | spotbugs | 1m 11s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 23s | the patch passed |
| +1 :green_heart: | compile | 1m 4s | the patch passed |
| +1 :green_heart: | javac | 1m 4s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 30s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 11m 51s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 1m 20s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 14s | The patch does not generate ASF License warnings. |
| | | | 33m 20s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3419/3/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3419 |
| Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti checkstyle compile |
| uname | Linux c7bf791d4966 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 166becfd66 |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Max. process+thread count | 96 (vs. ulimit of 12500) |
| modules | C: hbase-client U: hbase-client |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3419/3/console |
| versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #3419: HBASE-26027 The calling of HTable.batch blocked at AsyncRequestFuture…
Apache-HBase commented on pull request #3419: URL: https://github.com/apache/hbase/pull/3419#issuecomment-869357265

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:---:|---:|:---|:---|
| +0 :ok: | reexec | 0m 36s | Docker mode activated. |
| -0 :warning: | yetus | 0m 6s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ branch-2 Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 31s | branch-2 passed |
| +1 :green_heart: | compile | 0m 24s | branch-2 passed |
| +1 :green_heart: | shadedjars | 5m 56s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 24s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 21s | the patch passed |
| +1 :green_heart: | compile | 0m 29s | the patch passed |
| +1 :green_heart: | javac | 0m 29s | the patch passed |
| +1 :green_heart: | shadedjars | 6m 9s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 25s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 45s | hbase-client in the patch passed. |
| | | | 25m 20s | |

| Subsystem | Report/Notes |
|---:|:---|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3419/3/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/3419 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 22bfa111e70e 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 166becfd66 |
| Default Java | AdoptOpenJDK-1.8.0_282-b08 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3419/3/testReport/ |
| Max. process+thread count | 346 (vs. ulimit of 12500) |
| modules | C: hbase-client U: hbase-client |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3419/3/console |
| versions | git=2.17.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache9 commented on pull request #3423: HBASE-26017 fix pe tool totalRows exceed maximum of int
Apache9 commented on pull request #3423: URL: https://github.com/apache/hbase/pull/3423#issuecomment-869345547 'larger than' -> 'greater than'. And would you mind explaining a bit how you tested that totalRows exceeds the maximum of int? Thanks.
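The class of bug this PR addresses can be demonstrated in isolation. The sketch below is hypothetical — the names are not taken from the PE tool patch — and simply shows that row totals computed in `int` wrap silently once they pass `Integer.MAX_VALUE` (2,147,483,647), while widening to `long` before multiplying does not:

```java
// Hypothetical illustration (names not from the patch): why totalRows must be a long.
public class RowCountSketch {
    static int totalRowsInt(int perClientRows, int numClients) {
        return perClientRows * numClients;          // int arithmetic: wraps for large inputs
    }

    static long totalRowsLong(int perClientRows, int numClients) {
        return (long) perClientRows * numClients;   // widen before multiplying: no wrap
    }

    public static void main(String[] args) {
        System.out.println(totalRowsInt(1_000_000_000, 3));   // -1294967296 (wrapped)
        System.out.println(totalRowsLong(1_000_000_000, 3));  // 3000000000
    }
}
```

A negative or nonsensically small "total rows" value from the int version is the kind of symptom one would test for when verifying the fix.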
[GitHub] [hbase] nyl3532016 commented on a change in pull request #3425: HBASE-25991 Do compaction on compaction server
nyl3532016 commented on a change in pull request #3425: URL: https://github.com/apache/hbase/pull/3425#discussion_r659425639 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CompactionServerStorage.java ## @@ -0,0 +1,139 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.compactionserver; + +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; +import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor; +import org.apache.hadoop.hbase.client.RegionInfo; +import org.apache.yetus.audience.InterfaceAudience; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +@InterfaceAudience.Private +/** + * since we do not maintain StoreFileManager in compaction server(can't refresh when flush). 
we use + * external storage(this class) to record compacting files, and initialize a new HStore every time Review comment: OK,add a link to `selectCompaction` in javadoc ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CompactionThreadManager.java ## @@ -19,41 +18,483 @@ package org.apache.hadoop.hbase.compactionserver; import java.io.IOException; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashSet; +import java.util.List; +import java.util.Optional; +import java.util.OptionalLong; +import java.util.Set; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; +import java.util.concurrent.RejectedExecutionHandler; +import java.util.concurrent.ThreadPoolExecutor; +import java.util.concurrent.TimeUnit; +import java.util.stream.Collectors; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.ChoreService; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.client.AsyncRegionServerAdmin; +import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor; +import org.apache.hadoop.hbase.client.RegionInfo; +import org.apache.hadoop.hbase.client.TableDescriptor; +import org.apache.hadoop.hbase.fs.HFileSystem; +import org.apache.hadoop.hbase.monitoring.MonitoredTask; +import org.apache.hadoop.hbase.monitoring.TaskMonitor; +import org.apache.hadoop.hbase.regionserver.HRegion; +import org.apache.hadoop.hbase.regionserver.HRegionFileSystem; +import org.apache.hadoop.hbase.regionserver.HStore; +import org.apache.hadoop.hbase.regionserver.HStoreFile; +import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext; +import org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker; +import org.apache.hadoop.hbase.regionserver.throttle.PressureAwareCompactionThroughputController; 
+import org.apache.hadoop.hbase.regionserver.throttle.ThroughputControllerService; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.util.CommonFSUtils; +import org.apache.hadoop.hbase.util.FSTableDescriptors; +import org.apache.hadoop.hbase.util.FutureUtils; +import org.apache.hadoop.hbase.util.Pair; +import org.apache.hadoop.hbase.util.StealJobQueue; import org.apache.yetus.audience.InterfaceAudience; import org.slf4j.Logger; import org.slf4j.LoggerFactory; +import org.apache.hbase.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder; + +import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionRequest; +import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionRequest.Builder; +import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionResponse; @InterfaceAudience.Private -public class CompactionThreadManager { +public class CompactionThreadManager implements ThroughputControllerService { private static Logger LOG = LoggerFactory.getLogger(CompactionThreadManager.class); + // Configuration key for the large compaction
[GitHub] [hbase] bsglz commented on a change in pull request #3419: HBASE-26027 The calling of HTable.batch blocked at AsyncRequestFuture…
bsglz commented on a change in pull request #3419: URL: https://github.com/apache/hbase/pull/3419#discussion_r659424663 ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncRequestFutureImpl.java ## @@ -1140,7 +1140,8 @@ private String buildDetailedErrorMsg(String string, int index) { @Override public void waitUntilDone() throws InterruptedIOException { try { - waitUntilDone(Long.MAX_VALUE); + long cutoff = (EnvironmentEdgeManager.currentTime() + this.operationTimeout) * 1000L; Review comment: The type of operationTimeout is int, so there seems to be no need to consider the Long.MAX_VALUE case.
[jira] [Commented] (HBASE-26026) HBase Write may be stuck forever when using CompactingMemStore
[ https://issues.apache.org/jira/browse/HBASE-26026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17370389#comment-17370389 ] chenglei commented on HBASE-26026: -- [~zhangduo], would you please help me with a review at your convenience? Thanks. > HBase Write may be stuck forever when using CompactingMemStore > -- > > Key: HBASE-26026 > URL: https://issues.apache.org/jira/browse/HBASE-26026 > Project: HBase > Issue Type: Bug > Components: in-memory-compaction >Affects Versions: 2.3.0, 2.4.0 >Reporter: chenglei >Assignee: chenglei >Priority: Major > > Sometimes I observed that HBase writes might be stuck in my hbase cluster > when {{CompactingMemStore}} is enabled. I have simulated the problem by a unit > test in my PR. > The problem is caused by {{CompactingMemStore.checkAndAddToActiveSize}} : > {code:java} > 425 private boolean checkAndAddToActiveSize(MutableSegment currActive, Cell > cellToAdd, > 426 MemStoreSizing memstoreSizing) { > 427 if (shouldFlushInMemory(currActive, cellToAdd, memstoreSizing)) { > 428 if (currActive.setInMemoryFlushed()) { > 429 flushInMemory(currActive); > 430 if (setInMemoryCompactionFlag()) { > 431 // The thread is dispatched to do in-memory compaction in the > background > .. 
> } > {code} > In line 427, if {{currActive.getDataSize}} plus the size of {{cellToAdd}} > exceeds {{CompactingMemStore.inmemoryFlushSize}}, then {{currActive}} should > be flushed, and {{MutableSegment.setInMemoryFlushed()}} is invoked in above line > 428 : > {code:java} > public boolean setInMemoryFlushed() { > return flushed.compareAndSet(false, true); > } > {code} > After setting {{currActive.flushed}} to true, in above line 429 > {{flushInMemory(currActive)}} invokes > {{CompactingMemStore.pushActiveToPipeline}} : > {code:java} > protected void pushActiveToPipeline(MutableSegment currActive) { > if (!currActive.isEmpty()) { > pipeline.pushHead(currActive); > resetActive(); > } > } > {code} > In the above {{CompactingMemStore.pushActiveToPipeline}} method, if > {{currActive.cellSet}} is empty, then nothing is done. Due to concurrent > writes, and because we first add the cell size to {{currActive.getDataSize}} and > then actually add the cell to {{currActive.cellSet}}, it is possible that > {{currActive.getDataSize}} cannot accommodate {{cellToAdd}} while > {{currActive.cellSet}} is still empty, because pending writes have not yet > added their cells to {{currActive.cellSet}}. > So if {{currActive.cellSet}} is empty now, then no new {{ActiveSegment}} is > created, and new writes still target {{currActive}}, but since > {{currActive.flushed}} is true, {{currActive}} can never enter > {{flushInMemory(currActive)}} again, and a new {{ActiveSegment}} can never be > created! In the end all writes would be stuck. > In my opinion, once {{currActive.flushed}} is set to true, it cannot > continue to be used as the {{ActiveSegment}}, and because of concurrent pending writes, > only after {{currActive.updatesLock.writeLock()}} is acquired (i.e. > {{currActive.waitForUpdates}} is called) in > {{CompactingMemStore.inMemoryCompaction}} can we safely say whether {{currActive}} > is empty or not. 
> My fix is to remove the {{if (!currActive.isEmpty())}} check here and leave the > check to the background {{InMemoryCompactionRunnable}} after > {{currActive.waitForUpdates}} is called. An alternative fix is to use a > synchronization mechanism in the {{checkAndAddToActiveSize}} method to block > all writes, wait for all pending writes to complete (i.e. until > {{currActive.waitForUpdates}} is called), and, if {{currActive}} is still empty, > set {{currActive.flushed}} back to false. But I am not inclined to use > such heavy synchronization in the write path; I think we had better > maintain the lockless implementation of the {{CompactingMemStore.add}} method just > as it is now, and {{currActive.waitForUpdates}} is better left in the background > {{InMemoryCompactionRunnable}}. -- This message was sent by Atlassian Jira (v8.3.4#803005)
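The stuck condition described in this report can be reduced to a few lines. The sketch below is deliberately simplified — the field and method names are illustrative, not HBase's actual internals — but it captures the mechanism: once the `flushed` flag is CAS-ed to true while the segment's cell set is still empty, the emptiness guard skips the segment swap, and every later flush attempt fails the CAS, so a fresh active segment is never installed.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Simplified model (not HBase code) of the race described above:
// an empty segment marked flushed is never replaced, and the CAS
// prevents any later attempt from replacing it either.
public class StuckSegmentSketch {
    final AtomicBoolean flushed = new AtomicBoolean(false);
    boolean empty = true;       // pending writers have reserved size but not yet inserted cells
    boolean replaced = false;   // whether a fresh active segment was installed

    void tryInMemoryFlush() {
        if (flushed.compareAndSet(false, true)) {   // models setInMemoryFlushed()
            if (!empty) {                           // models the isEmpty() guard in pushActiveToPipeline
                replaced = true;                    // models pushHead + resetActive
            }
            // empty segment: flushed stays true, but no replacement happens
        }
        // CAS failed: segment already marked flushed, nothing to do -> stuck forever
    }

    public static void main(String[] args) {
        StuckSegmentSketch s = new StuckSegmentSketch();
        s.tryInMemoryFlush();   // first attempt sees an empty cell set
        s.tryInMemoryFlush();   // every later attempt fails the CAS
        System.out.println(s.flushed.get() && !s.replaced);  // true: permanently stuck
    }
}
```

The proposed fix corresponds to dropping the `!empty` guard here and deferring the emptiness decision to the background runnable, after all pending writers have drained.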
[jira] [Resolved] (HBASE-26025) Add a flag to mark if the IOError can be solved by retry in thrift IOError
[ https://issues.apache.org/jira/browse/HBASE-26025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan resolved HBASE-26025. --- Hadoop Flags: Reviewed Resolution: Resolved > Add a flag to mark if the IOError can be solved by retry in thrift IOError > -- > > Key: HBASE-26025 > URL: https://issues.apache.org/jira/browse/HBASE-26025 > Project: HBase > Issue Type: Improvement > Components: Thrift >Reporter: Yutong Xiao >Assignee: Yutong Xiao >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 1.7.1, 2.4.5 > > > Currently, if an HBaseIOException occurs, the thrift client can only get the > error message. This is inconvenient for the client constructing a retry > mechanism to handle the exception. So I added a canRetry mark in IOError to > make the client side exception handling smarter.
[jira] [Updated] (HBASE-26025) Add a flag to mark if the IOError can be solved by retry in thrift IOError
[ https://issues.apache.org/jira/browse/HBASE-26025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Reid Chan updated HBASE-26025: -- Fix Version/s: 1.7.1 > Add a flag to mark if the IOError can be solved by retry in thrift IOError > -- > > Key: HBASE-26025 > URL: https://issues.apache.org/jira/browse/HBASE-26025 > Project: HBase > Issue Type: Improvement > Components: Thrift >Reporter: Yutong Xiao >Assignee: Yutong Xiao >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 1.7.1, 2.4.5 > > > Currently, if an HBaseIOException occurs, the thrift client can only get the > error message. This is inconvenient for the client constructing a retry > mechanism to handle the exception. So I added a canRetry mark in IOError to > make the client side exception handling smarter.
[GitHub] [hbase] Reidddddd merged pull request #3429: HBASE-26025 Add a flag to mark if the IOError can be solved by retry in thrift IOError
Reidddddd merged pull request #3429: URL: https://github.com/apache/hbase/pull/3429
[GitHub] [hbase] bsglz commented on a change in pull request #3419: HBASE-26027 The calling of HTable.batch blocked at AsyncRequestFuture…
bsglz commented on a change in pull request #3419: URL: https://github.com/apache/hbase/pull/3419#discussion_r659422655 ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncRequestFutureImpl.java ## @@ -1140,7 +1140,8 @@ private String buildDetailedErrorMsg(String string, int index) { @Override public void waitUntilDone() throws InterruptedIOException { try { - waitUntilDone(Long.MAX_VALUE); + long cutoff = (EnvironmentEdgeManager.currentTime() + this.operationTimeout) * 1000L; Review comment: > What if operationTimeout here is negative (which means no timeout), or Long.MAX_VALUE? Good point, excluding those cases seems better; will fix later, thanks.
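One way to handle the cases raised in the review (a negative operationTimeout meaning "no timeout", or Long.MAX_VALUE, either of which makes the addition or the multiplication overflow) is sketched below. The field names mirror the diff, but the guard logic is only an illustration, not the actual fix that landed.

```java
// Sketch of a deadline computation that tolerates operationTimeout being
// negative ("no timeout") or Long.MAX_VALUE, and clamps instead of
// overflowing. Names follow the diff under review; the rest is illustrative.
final class DeadlineUtil {
  static long cutoffMicros(long nowMillis, long operationTimeoutMillis) {
    if (operationTimeoutMillis <= 0 || operationTimeoutMillis == Long.MAX_VALUE) {
      return Long.MAX_VALUE;            // no effective timeout: wait forever
    }
    long deadlineMillis = nowMillis + operationTimeoutMillis;
    if (deadlineMillis < nowMillis) {   // the addition overflowed
      return Long.MAX_VALUE;
    }
    // multiplying by 1000 (millis -> the unit waitUntilDone expects) can
    // overflow too, so clamp rather than wrap around to a negative cutoff
    return deadlineMillis > Long.MAX_VALUE / 1000L
        ? Long.MAX_VALUE
        : deadlineMillis * 1000L;
  }
}
```

The unguarded expression `(now + operationTimeout) * 1000L` silently wraps to a negative cutoff in exactly the cases the reviewer lists, which would make the wait return immediately instead of never.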
[GitHub] [hbase] rda3mon commented on pull request #3359: HBASE-25891 remove dependence storing wal filenames for backup
rda3mon commented on pull request #3359: URL: https://github.com/apache/hbase/pull/3359#issuecomment-869186011 > Will try to take a look tomorrow. It is a bit late in China, sorry... No problem. Have a look when you find some time. Thanks
[GitHub] [hbase] Apache9 commented on pull request #3359: HBASE-25891 remove dependence storing wal filenames for backup
Apache9 commented on pull request #3359: URL: https://github.com/apache/hbase/pull/3359#issuecomment-869175193 Will try to take a look tomorrow. It is a bit late in China, sorry...
[jira] [Resolved] (HBASE-25980) Master table.jsp pointed at meta throws 500 when no all replicas are online
[ https://issues.apache.org/jira/browse/HBASE-25980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-25980. --- Fix Version/s: 2.4.5 2.3.6 Hadoop Flags: Reviewed Resolution: Fixed Pushed to branch-2.3+. Thanks [~GeorryHuang] for contributing. > Master table.jsp pointed at meta throws 500 when no all replicas are online > --- > > Key: HBASE-25980 > URL: https://issues.apache.org/jira/browse/HBASE-25980 > Project: HBase > Issue Type: Bug > Components: master, meta replicas, UI >Affects Versions: 3.0.0-alpha-1, 2.5.0, 2.3.5 >Reporter: Nick Dimiduk >Assignee: Zhuoyue Huang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > > With a replica in a transition state, the UI renders, > {noformat} > HTTP ERROR 500 > Problem accessing /table.jsp. Reason: > Server Error > Caused by: > org.apache.hadoop.hbase.NotAllMetaRegionsOnlineException: Timed out; 1ms > at > org.apache.hadoop.hbase.zookeeper.MetaTableLocator.waitMetaRegionLocation(MetaTableLocator.java:190) > at > org.apache.hadoop.hbase.generated.master.table_jsp._jspService(table_jsp.java:264) > at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > {noformat}
[jira] [Updated] (HBASE-26034) Add support to take parallel backups
[ https://issues.apache.org/jira/browse/HBASE-26034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mallikarjun updated HBASE-26034: Description: Details to be filled. (was: TODO:) > Add support to take parallel backups > > > Key: HBASE-26034 > URL: https://issues.apache.org/jira/browse/HBASE-26034 > Project: HBase > Issue Type: Improvement > Components: backuprestore >Affects Versions: 3.0.0-alpha-2 >Reporter: Mallikarjun >Assignee: Mallikarjun >Priority: Major > Fix For: 3.0.0-alpha-2 > > > Details to be filled.
[jira] [Updated] (HBASE-26034) Add support to take parallel backups
[ https://issues.apache.org/jira/browse/HBASE-26034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mallikarjun updated HBASE-26034: Summary: Add support to take parallel backups (was: Add support to take multiple parallel backup) > Add support to take parallel backups > > > Key: HBASE-26034 > URL: https://issues.apache.org/jira/browse/HBASE-26034 > Project: HBase > Issue Type: Improvement > Components: backuprestore >Affects Versions: 3.0.0-alpha-2 >Reporter: Mallikarjun >Assignee: Mallikarjun >Priority: Major > Fix For: 3.0.0-alpha-2 > > > TODO:
[jira] [Commented] (HBASE-26031) Validate nightly builds run on new ci workers hbase11-hbase15
[ https://issues.apache.org/jira/browse/HBASE-26031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17370228#comment-17370228 ] Duo Zhang commented on HBASE-26031: --- The hbase10 node is an old one. The old machines are hbase1-hbase10 (yes, the numbering starts from 1, not 0; not very programmer-like...) > Validate nightly builds run on new ci workers hbase11-hbase15 > - > > Key: HBASE-26031 > URL: https://issues.apache.org/jira/browse/HBASE-26031 > Project: HBase > Issue Type: Task > Components: test >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Major > Attachments: image-2021-06-24-16-14-03-721.png > > > Per slack, asf infra has finished adding in nodes hbase10-hbase15 to > ci-hadoop. > make sure they can run nightly. > # Set labels for all these nodes to "hbase-staging" > # Push a feature branch off of current HEAD that updates the agent labels to > use "hbase-staging" > # trigger a bunch of runs. make sure *something* runs on each of the nodes > # Set labels for the nodes to "hbase" > # delete feature branch
[jira] [Commented] (HBASE-25891) Remove dependence storing WAL filenames for backup
[ https://issues.apache.org/jira/browse/HBASE-25891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17370227#comment-17370227 ] Mallikarjun commented on HBASE-25891: - [~anoop.hbase] [~stack] [~zhangduo] Can someone help me get this reviewed, please? > Remove dependence storing WAL filenames for backup > -- > > Key: HBASE-25891 > URL: https://issues.apache.org/jira/browse/HBASE-25891 > Project: HBase > Issue Type: Improvement > Components: backuprestore >Affects Versions: 3.0.0-alpha-1 >Reporter: Mallikarjun >Assignee: Mallikarjun >Priority: Major > Fix For: 3.0.0-alpha-1 > > > Context: > Currently, WAL filenames are stored in the `backup:system` meta table > {code:java} > // code placeholder > wals:preprod-dn-1%2C16020%2C1614844389000.1621996160175 column=meta:backupId, > timestamp=1622003479895, value=backup_1622003358258 > wals:preprod-dn-1%2C16020%2C1614844389000.1621996160175 column=meta:file, > timestamp=1622003479895, > value=hdfs://store/hbase/oldWALs/preprod-dn-1%2C16020%2C1614844389000.1621996160175 > wals:preprod-dn-1%2C16020%2C1614844389000.1621996160175 column=meta:root, > timestamp=1622003479895, value=s3a://2021-05-25--21-45-00--full/set1 > wals:preprod-dn-1%2C16020%2C1614844389000.1621999760280 column=meta:backupId, > timestamp=1622003479895, value=backup_1622003358258 > wals:preprod-dn-1%2C16020%2C1614844389000.1621999760280 column=meta:file, > timestamp=1622003479895, > value=hdfs://store/hbase/oldWALs/preprod-dn-1%2C16020%2C1614844389000.1621999760280 > wals:preprod-dn-1%2C16020%2C1614844389000.1621999760280 column=meta:root, > timestamp=1622003479895, value=s3a://2021-05-25--21-45-00--full/set1 > {code} > Also, every backup (incremental and full) performs a log roll just before > taking the backup, and stores the timestamp at which the log roll was > performed, per regionserver per backup, using the following format.
> > {code:java} > // code placeholder > rslogts:hdfs://xx.xx.xx.xx:8020/tmp/backup_yaktest\x00preprod-dn-2:16020 > column=meta:rs-log-ts, timestamp=1622887363301,value=\x00\x00\x01y\xDB\x81ar > rslogts:hdfs://xx.xx.xx.xx:8020/tmp/backup_yaktest\x00preprod-dn-3:16020 > column=meta:rs-log-ts, timestamp=1622887363294, value=\x00\x00\x01y\xDB\x81aP > rslogts:hdfs://xx.xx.xx.xx:8020/tmp/backup_yaktest\x00preprod-dn-1:16020 > column=meta:rs-log-ts, timestamp=1622887363275, > value=\x00\x00\x01y\xDB\x81\x85 > {code} > > > There are 2 cases in which the WAL references stored in `backup:system` > are used: > 1. To clean up WALs for which a backup has already been taken, via > `BackupLogCleaner`. > Since the log roll timestamp is stored as part of each backup, per regionserver, we can > check all previous successful backups and identify which logs are to > be retained and which are to be cleaned up, as follows: > * Identify the latest successful backups performed per table. > * Per backup identified above, identify the oldest log roll > timestamp performed per regionserver per table. > * All WALs older than the oldest log roll timestamp performed > for any backed-up table can be removed by `BackupLogCleaner`. > > 2. During incremental backup, to check the system table for any > duplicate WALs that have already been backed up. > * Incremental backup already identifies which WALs to back up > using the `rslogts:` entries mentioned above. > * Additionally it checks `wals:` to ensure no log is backed up a second > time. This is redundant, and no extra benefit has been seen from it.
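The retention rule in case 1 can be sketched as follows: a WAL becomes deletable once it is older than the oldest log-roll timestamp recorded by the latest successful backup of every table. All types and names below are invented for illustration; this is not the actual BackupLogCleaner code.

```java
// Illustrative decision logic for a backup-aware WAL cleaner. A WAL file is
// deletable only when it predates the oldest log-roll point that any table's
// most recent successful backup still depends on. Names are invented.
import java.util.Map;

final class WalRetention {
  /**
   * @param walTimestamp         roll timestamp encoded in the WAL file name
   * @param oldestRollTsPerTable for each table, the log-roll timestamp taken
   *                             by its latest successful backup on this server
   */
  static boolean isDeletable(long walTimestamp, Map<String, Long> oldestRollTsPerTable) {
    if (oldestRollTsPerTable.isEmpty()) {
      return false;                     // no successful backup yet: keep everything
    }
    // the WAL may still be needed by the table with the oldest backup point
    long oldest = oldestRollTsPerTable.values().stream()
        .mapToLong(Long::longValue)
        .min()
        .getAsLong();
    return walTimestamp < oldest;
  }
}
```

This is the sense in which the `rslogts:` timestamps alone are sufficient for cleanup, making the per-file `wals:` references redundant.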
[GitHub] [hbase] Apache9 merged pull request #3373: HBASE-25980 Master table.jsp pointed at meta throws 500 when no all r…
Apache9 merged pull request #3373: URL: https://github.com/apache/hbase/pull/3373
[GitHub] [hbase] Apache9 merged pull request #3374: HBASE-25980 Master table.jsp pointed at meta throws 500 when no all r…
Apache9 merged pull request #3374: URL: https://github.com/apache/hbase/pull/3374
[jira] [Resolved] (HBASE-25914) Provide slow/large logs on RegionServer UI
[ https://issues.apache.org/jira/browse/HBASE-25914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-25914. --- Fix Version/s: 2.5.0 3.0.0-alpha-1 Hadoop Flags: Reviewed Resolution: Fixed Pushed to master and branch-2. Thanks [~GeorryHuang] for contributing. > Provide slow/large logs on RegionServer UI > -- > > Key: HBASE-25914 > URL: https://issues.apache.org/jira/browse/HBASE-25914 > Project: HBase > Issue Type: Improvement > Components: regionserver, UI >Affects Versions: 3.0.0-alpha-1, 2.5.0 >Reporter: Zhuoyue Huang >Assignee: Zhuoyue Huang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0 > > Attachments: callDetails.png, largeLog.png, opeartionDetails1.png, > operationDetails2.png, slowLog.png > > > Pull slow/large logs from the in-memory queues on the RegionServer, then display > the details in the RegionServer status UI
[GitHub] [hbase] Apache9 merged pull request #3321: HBASE-25914 Provide slow/large logs on RegionServer UI
Apache9 merged pull request #3321: URL: https://github.com/apache/hbase/pull/3321
[GitHub] [hbase] Apache9 merged pull request #3319: HBASE-25914 Provide slow/large logs on RegionServer UI
Apache9 merged pull request #3319: URL: https://github.com/apache/hbase/pull/3319
[jira] [Updated] (HBASE-26034) Add support to take multiple parallel backup
[ https://issues.apache.org/jira/browse/HBASE-26034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mallikarjun updated HBASE-26034: Description: TODO: > Add support to take multiple parallel backup > > > Key: HBASE-26034 > URL: https://issues.apache.org/jira/browse/HBASE-26034 > Project: HBase > Issue Type: Improvement > Components: backuprestore >Affects Versions: 3.0.0-alpha-2 >Reporter: Mallikarjun >Assignee: Mallikarjun >Priority: Major > Fix For: 3.0.0-alpha-2 > > > TODO:
[jira] [Commented] (HBASE-25976) Implement a master based ReplicationTracker
[ https://issues.apache.org/jira/browse/HBASE-25976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17370225#comment-17370225 ] Duo Zhang commented on HBASE-25976: --- Based on HBASE-26029, we will remove this class soon, so no release note. > Implement a master based ReplicationTracker > --- > > Key: HBASE-25976 > URL: https://issues.apache.org/jira/browse/HBASE-25976 > Project: HBase > Issue Type: Sub-task > Components: Replication >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0 > > > Now the only thing we care about is the live region servers, and we can get > this information from the master, so let's do it to remove the dependency on > zookeeper.
[jira] [Created] (HBASE-26034) Add support to take multiple parallel backup
Mallikarjun created HBASE-26034: --- Summary: Add support to take multiple parallel backup Key: HBASE-26034 URL: https://issues.apache.org/jira/browse/HBASE-26034 Project: HBase Issue Type: Improvement Components: backuprestore Affects Versions: 3.0.0-alpha-2 Reporter: Mallikarjun Assignee: Mallikarjun Fix For: 3.0.0-alpha-2
[jira] [Resolved] (HBASE-26015) Should implement getRegionServers(boolean) method in AsyncAdmin
[ https://issues.apache.org/jira/browse/HBASE-26015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-26015. --- Hadoop Flags: Reviewed Resolution: Fixed Pushed to branch-2.3+. Thanks [~GeorryHuang] for contributing. > Should implement getRegionServers(boolean) method in AsyncAdmin > --- > > Key: HBASE-26015 > URL: https://issues.apache.org/jira/browse/HBASE-26015 > Project: HBase > Issue Type: Task > Components: Admin, Client >Reporter: Duo Zhang >Assignee: Zhuoyue Huang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > > We have this method in Admin but not in AsyncAdmin, we should align these two > interfaces.
[jira] [Updated] (HBASE-26015) Should implement getRegionServers(boolean) method in AsyncAdmin
[ https://issues.apache.org/jira/browse/HBASE-26015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-26015: -- Fix Version/s: 2.4.5 2.3.6 2.5.0 3.0.0-alpha-1 > Should implement getRegionServers(boolean) method in AsyncAdmin > --- > > Key: HBASE-26015 > URL: https://issues.apache.org/jira/browse/HBASE-26015 > Project: HBase > Issue Type: Task > Components: Admin, Client >Reporter: Duo Zhang >Assignee: Zhuoyue Huang >Priority: Major > Fix For: 3.0.0-alpha-1, 2.5.0, 2.3.6, 2.4.5 > > > We have this method in Admin but not in AsyncAdmin, we should align these two > interfaces.
[GitHub] [hbase] Apache9 merged pull request #3406: HBASE-26015 Should implement getRegionServers(boolean) method in Asyn…
Apache9 merged pull request #3406: URL: https://github.com/apache/hbase/pull/3406
[GitHub] [hbase] Apache9 commented on a change in pull request #3419: HBASE-26027 The calling of HTable.batch blocked at AsyncRequestFuture…
Apache9 commented on a change in pull request #3419: URL: https://github.com/apache/hbase/pull/3419#discussion_r659325906 ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncRequestFutureImpl.java ## @@ -1140,7 +1140,8 @@ private String buildDetailedErrorMsg(String string, int index) { @Override public void waitUntilDone() throws InterruptedIOException { try { - waitUntilDone(Long.MAX_VALUE); + long cutoff = (EnvironmentEdgeManager.currentTime() + this.operationTimeout) * 1000L; Review comment: What if operationTimeout here is negative (which means no timeout), or Long.MAX_VALUE?
[GitHub] [hbase] Apache9 commented on a change in pull request #3425: HBASE-25991 Do compaction on compaction server
Apache9 commented on a change in pull request #3425: URL: https://github.com/apache/hbase/pull/3425#discussion_r659314613 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CompactionTask.java ## @@ -0,0 +1,169 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.hbase.compactionserver; + +import java.util.List; +import org.apache.hadoop.hbase.ServerName; +import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor; +import org.apache.hadoop.hbase.client.RegionInfo; +import org.apache.hadoop.hbase.monitoring.MonitoredTask; +import org.apache.hadoop.hbase.regionserver.HStore; +import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext; +import org.apache.yetus.audience.InterfaceAudience; +import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos; + +@InterfaceAudience.Private +public final class CompactionTask { + private ServerName rsServerName; + private RegionInfo regionInfo; + private ColumnFamilyDescriptor cfd; + private CompactionContext compactionContext; + private HStore store; Review comment: OK, the HStore is here... 
## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/compactionserver/CompactionThreadManager.java ## @@ -19,41 +18,483 @@ package org.apache.hadoop.hbase.compactionserver; import java.io.IOException; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.Comparator; +import java.util.HashSet; +import java.util.List; +import java.util.Optional; +import java.util.OptionalLong; +import java.util.Set; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentMap; +import java.util.concurrent.RejectedExecutionHandler; +import java.util.concurrent.ThreadPoolExecutor; +import java.util.concurrent.TimeUnit; +import java.util.stream.Collectors; import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hbase.ChoreService; import org.apache.hadoop.hbase.ServerName; import org.apache.hadoop.hbase.client.AsyncRegionServerAdmin; +import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor; +import org.apache.hadoop.hbase.client.RegionInfo; +import org.apache.hadoop.hbase.client.TableDescriptor; +import org.apache.hadoop.hbase.fs.HFileSystem; +import org.apache.hadoop.hbase.monitoring.MonitoredTask; +import org.apache.hadoop.hbase.monitoring.TaskMonitor; +import org.apache.hadoop.hbase.regionserver.HRegion; +import org.apache.hadoop.hbase.regionserver.HRegionFileSystem; +import org.apache.hadoop.hbase.regionserver.HStore; +import org.apache.hadoop.hbase.regionserver.HStoreFile; +import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext; +import org.apache.hadoop.hbase.regionserver.compactions.CompactionLifeCycleTracker; +import org.apache.hadoop.hbase.regionserver.throttle.PressureAwareCompactionThroughputController; +import org.apache.hadoop.hbase.regionserver.throttle.ThroughputControllerService; +import org.apache.hadoop.hbase.util.Bytes; +import org.apache.hadoop.hbase.util.CommonFSUtils; +import org.apache.hadoop.hbase.util.FSTableDescriptors; +import org.apache.hadoop.hbase.util.FutureUtils; +import org.apache.hadoop.hbase.util.Pair; +import org.apache.hadoop.hbase.util.StealJobQueue; import org.apache.yetus.audience.InterfaceAudience; import org.slf4j.Logger; import org.slf4j.LoggerFactory; +import org.apache.hbase.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder; + +import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil; +import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionRequest; +import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionRequest.Builder; +import org.apache.hadoop.hbase.shaded.protobuf.generated.CompactionProtos.CompleteCompactionResponse; @InterfaceAudience.Private -public class CompactionThreadManager { +public class CompactionThreadManager implements ThroughputControllerService { private static Logger LOG =
[GitHub] [hbase] jojochuang commented on a change in pull request #3426: HBASE-26032 Make HRegion.getStores() an O(1) operation
jojochuang commented on a change in pull request #3426: URL: https://github.com/apache/hbase/pull/3426#discussion_r659299125 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Region.java ## @@ -92,7 +92,7 @@ * Use with caution. Exposed for use of fixup utilities. * @return a list of the Stores managed by this region */ - List getStores(); + Collection getStores(); Review comment: Yeah, I forgot to mention this is a breaking change, and was wondering if it would be acceptable for HBase 3 (master branch) only. It might not be worth the effort, though.
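The compatibility concern can be seen in a minimal sketch: widening the declared return type from List to Collection lets the region hand back its internal map's values view in O(1) instead of copying into a list, but any caller that relied on List-only methods such as get(int) stops compiling. The types below are stand-ins, not the actual HBase interfaces.

```java
// Minimal illustration of why widening getStores() from List to Collection
// is a source-incompatible change, and what it buys: the values() view is
// returned in O(1). Store, RegionV2, and Caller are invented stand-ins.
import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface Store {}

final class RegionV2 {
  private final Map<String, Store> stores = new ConcurrentHashMap<>();

  void addStore(String family, Store store) {
    stores.put(family, store);
  }

  // O(1): hands back the live values view instead of copying into a List
  Collection<Store> getStores() {
    return stores.values();
  }
}

final class Caller {
  static int count(RegionV2 region) {
    // Collection still supports size() and iteration...
    int n = 0;
    for (Store s : region.getStores()) {
      n++;
    }
    return n;
    // ...but `region.getStores().get(0)` would no longer compile, which is
    // the source-level breaking change discussed in the review.
  }
}
```

That source-level breakage is why such a widening is typically reserved for a major release line, as the comment suggests for HBase 3.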