[GitHub] [hbase] gkanade commented on pull request #2201: HBASE-24713 backport to branch-2
gkanade commented on pull request #2201:
URL: https://github.com/apache/hbase/pull/2201#issuecomment-669016394

@ramkrish86

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] gkanade opened a new pull request #2201: HBASE-24713 backport to branch-2
gkanade opened a new pull request #2201:
URL: https://github.com/apache/hbase/pull/2201
[GitHub] [hbase] bsglz opened a new pull request #2200: HBASE-24821 simplify the logic of getRegionInfo in TestFlushFromClien…
bsglz opened a new pull request #2200:
URL: https://github.com/apache/hbase/pull/2200

…t to reduce redundancy code
[jira] [Created] (HBASE-24821) simplify the logic of getRegionInfo in TestFlushFromClient to reduce redundancy code
Zheng Wang created HBASE-24821:
----
Summary: simplify the logic of getRegionInfo in TestFlushFromClient to reduce redundancy code
Key: HBASE-24821
URL: https://issues.apache.org/jira/browse/HBASE-24821
Project: HBase
Issue Type: Improvement
Components: test
Reporter: Zheng Wang
Assignee: Zheng Wang

Current logic:
{code:java}
private List<HRegion> getRegionInfo() {
  return TEST_UTIL.getHBaseCluster().getLiveRegionServerThreads().stream()
      .map(JVMClusterUtil.RegionServerThread::getRegionServer)
      .flatMap(r -> r.getRegions().stream())
      .filter(r -> r.getTableDescriptor().getTableName().equals(tableName))
      .collect(Collectors.toList());
}
{code}
MiniHBaseCluster already has a similar method that does the same thing, so this could directly call:
{code:java}
private List<HRegion> getRegionInfo() {
  return TEST_UTIL.getHBaseCluster().getRegions(tableName);
}
{code}

-- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2199: HBASE-24819 Fix flaky test TestRaceBetweenSCPAndDTP and TestRaceBetwe…
Apache-HBase commented on pull request #2199:
URL: https://github.com/apache/hbase/pull/2199#issuecomment-669008198

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 1m 57s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
||| _ branch-2.2 Compile Tests _ |
| +1 :green_heart: | mvninstall | 5m 15s | branch-2.2 passed |
| +1 :green_heart: | compile | 0m 54s | branch-2.2 passed |
| +1 :green_heart: | checkstyle | 1m 22s | branch-2.2 passed |
| +1 :green_heart: | shadedjars | 4m 1s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 36s | branch-2.2 passed |
| +0 :ok: | spotbugs | 3m 9s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 3m 7s | branch-2.2 passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 44s | the patch passed |
| +1 :green_heart: | compile | 0m 56s | the patch passed |
| +1 :green_heart: | javac | 0m 56s | the patch passed |
| +1 :green_heart: | checkstyle | 1m 18s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedjars | 4m 8s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | hadoopcheck | 25m 9s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 2.10.0 or 3.1.2 3.2.1. |
| +1 :green_heart: | javadoc | 0m 35s | the patch passed |
| +1 :green_heart: | findbugs | 3m 23s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 167m 8s | hbase-server in the patch passed. |
| +1 :green_heart: | asflicense | 0m 37s | The patch does not generate ASF License warnings. |
| | | 231m 24s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2199/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2199 |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux c25111737a19 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-2199/out/precommit/personality/provided.sh |
| git revision | branch-2.2 / 363a31a5b3 |
| Default Java | 1.8.0_181 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2199/1/testReport/ |
| Max. process+thread count | 4336 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2199/1/console |
| versions | git=2.11.0 maven=(2018-06-17T18:33:14Z) findbugs=3.1.11 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HBASE-24754) Bulk load performance is degraded in HBase 2
[ https://issues.apache.org/jira/browse/HBASE-24754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171274#comment-17171274 ]

ramkrishna.s.vasudevan commented on HBASE-24754:

[~sreenivasulureddy] - just replace the entire code with what we have in branch-1.3, where we don't check for any tag and its attributes, and just do:
{code}
Put p = put;
for (List<Cell> cells : p.getFamilyCellMap().values()) {
  for (Cell cell : cells) {
    KeyValue kv = KeyValueUtil.ensureKeyValueType(cell);
    if (map.add(kv)) { // don't count duplicated kv into size
      curSize += kv.heapSize();
    }
  }
}
{code}
If this still does not help, then the only issue should be with the Comparator, but at a first glance I don't find anything there.

> Bulk load performance is degraded in HBase 2
> --------
>
> Key: HBASE-24754
> URL: https://issues.apache.org/jira/browse/HBASE-24754
> Project: HBase
> Issue Type: Bug
> Components: Performance
> Affects Versions: 2.2.3
> Reporter: Ajeet Rai
> Priority: Major
> Attachments: Branch1.3_putSortReducer_sampleCode.patch, Branch2_putSortReducer_sampleCode.patch
>
> In our test, it is observed that bulk load performance is degraded in HBase 2.
> Test Input: > 1: Table with 500 region(300 column family) > 2: data =2 TB > Data Sample > 186000120150205100068110,1860001,20150205,5,404,735412,2938,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,1 > 3: Cluster: 7 node(2 master+5 Region Server) > 4: No of Container Launched are same in both case > HBase 2 took 10% more time then HBase 1.3 where test input is same for both > cluster > > |Feature|HBase 2.2.3 > Time(Sec)|HBase 1.3.1 > Time(Sec)|Diff%|Snappy lib: > | > |BulkLoad|21837|19686.16|-10.93|Snappy lib: > HBase 2.2.3: 1.4 > HBase 1.3.1: 1.4| -- This message was sent by Atlassian Jira (v8.3.4#803005)
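[Editorial note] The branch-1.3-style loop quoted in the comment above relies on a sorted set's add() returning false for an element already present, so only unique KeyValues count toward curSize. A minimal, self-contained sketch of that dedup-by-add pattern, using plain strings as stand-ins for KeyValue and heapSize() (the HBase classes are not assumed here):

```java
import java.util.TreeSet;

public class DedupSizeSketch {
    // Sum a size metric over unique cells only: TreeSet.add() returns
    // false when the element is already present, so duplicates are skipped.
    static long uniqueSize(String[] cells) {
        TreeSet<String> seen = new TreeSet<>();
        long curSize = 0;
        for (String cell : cells) {
            if (seen.add(cell)) { // don't count duplicated cells into size
                curSize += cell.length(); // stand-in for kv.heapSize()
            }
        }
        return curSize;
    }

    public static void main(String[] args) {
        // "aa" appears twice but is only counted once: 2 + 3 = 5
        System.out.println(uniqueSize(new String[] {"aa", "bbb", "aa"}));
    }
}
```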
[jira] [Created] (HBASE-24820) [hbase-thirdparty] Add jersey-hk2 when shading jersey
Duo Zhang created HBASE-24820:
--
Summary: [hbase-thirdparty] Add jersey-hk2 when shading jersey
Key: HBASE-24820
URL: https://issues.apache.org/jira/browse/HBASE-24820
Project: HBase
Issue Type: Task
Components: dependencies, hbase-thirdparty
Reporter: Duo Zhang
Assignee: Duo Zhang
Fix For: thirdparty-3.4.0
[jira] [Commented] (HBASE-24795) RegionMover should deal with unknown (split/merged) regions
[ https://issues.apache.org/jira/browse/HBASE-24795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171261#comment-17171261 ]

Hudson commented on HBASE-24795:

Results for branch branch-2 [build #2768 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2768/]: (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2768/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2756/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2765/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> RegionMover should deal with unknown (split/merged) regions
> ---
>
> Key: HBASE-24795
> URL: https://issues.apache.org/jira/browse/HBASE-24795
> Project: HBase
> Issue Type: Improvement
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0
>
> For a cluster with very high load, it is quite common to see flushes/compactions happening every minute on each RegionServer, and there is a good chance of multiple regions going through splitting/merging.
> RegionMover, while unloading all regions (graceful stop), writes all regions to a local file, and while loading them back (graceful start), it tries to bring every single region back from the other RSs. While loading regions back, even if a single region can't be moved back, RegionMover considers load() a failure.
> This misses the possibility that some regions have gone through the split/merge process, so not all regions written to the local file may still exist. Hence, RegionMover should gracefully handle moving any unknown region without marking load() failed.
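[Editorial note] The improvement described above amounts to treating "region no longer exists" as a skip rather than a failure when reloading regions. A hedged sketch of that control flow (not the actual HBASE-24795 patch; the list and lookup types are simplified stand-ins for the RegionMover internals):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RegionMoverSketch {
    // Reload saved regions, skipping any that no longer exist (e.g. they
    // were split or merged while the server was being restarted) instead
    // of failing the whole load().
    static List<String> loadRegions(List<String> saved, Set<String> live) {
        List<String> moved = new ArrayList<>();
        for (String region : saved) {
            if (!live.contains(region)) {
                System.out.println("Skipping unknown region: " + region);
                continue; // do not mark load() failed for a vanished region
            }
            moved.add(region); // stand-in for the actual move-back call
        }
        return moved;
    }

    public static void main(String[] args) {
        List<String> saved = Arrays.asList("r1", "r2", "r3");
        Set<String> live = new HashSet<>(Arrays.asList("r1", "r3"));
        System.out.println(loadRegions(saved, live)); // r2 was merged away
    }
}
```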
[jira] [Commented] (HBASE-24808) skip empty log cleaner delegate class names (WAS => cleaner.CleanerChore: Can NOT create CleanerDelegate= ClassNotFoundException)
[ https://issues.apache.org/jira/browse/HBASE-24808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171260#comment-17171260 ]

Hudson commented on HBASE-24808:

Results for branch branch-2 [build #2768 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2768/]: (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2768/General_20Nightly_20Build_20Report/]
(/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2756/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]
(x) {color:red}-1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2765/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> skip empty log cleaner delegate class names (WAS => cleaner.CleanerChore: Can NOT create CleanerDelegate= ClassNotFoundException)
> -----
>
> Key: HBASE-24808
> URL: https://issues.apache.org/jira/browse/HBASE-24808
> Project: HBase
> Issue Type: Bug
> Reporter: Michael Stack
> Assignee: Michael Stack
> Priority: Trivial
> Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0
>
> 2020-07-31 00:19:49,839 WARN [master/ps0753:16000:becomeActiveMaster] cleaner.CleanerChore: Can NOT create CleanerDelegate=
> java.lang.ClassNotFoundException:
> at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:264)
> at org.apache.hadoop.hbase.master.cleaner.CleanerChore.newFileCleaner(CleanerChore.java:173)
> at org.apache.hadoop.hbase.master.cleaner.CleanerChore.initCleanerChain(CleanerChore.java:155)
> at org.apache.hadoop.hbase.master.cleaner.CleanerChore.<init>(CleanerChore.java:105)
> at org.apache.hadoop.hbase.master.cleaner.HFileCleaner.<init>(HFileCleaner.java:139)
> at org.apache.hadoop.hbase.master.cleaner.HFileCleaner.<init>(HFileCleaner.java:120)
> at org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1424)
> at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1025)
> at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2189)
> at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:609)
> at java.lang.Thread.run(Thread.java:748)
>
> This is the config (note the empty value):
> <property>
>   <name>hbase.master.hfilecleaner.plugins</name>
>   <value></value>
> </property>
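[Editorial note] The warning above comes from handing an empty class name (the empty config value) to class loading. A minimal sketch of the fix's idea — skip blank entries when parsing the comma-separated plugin list — with a hypothetical parsePlugins helper, not the actual CleanerChore code:

```java
import java.util.ArrayList;
import java.util.List;

public class CleanerPluginsSketch {
    // Parse a comma-separated cleaner-plugin config value, skipping blank
    // entries so an empty <value></value> never reaches Class.forName("").
    static List<String> parsePlugins(String configValue) {
        List<String> classNames = new ArrayList<>();
        if (configValue == null) {
            return classNames;
        }
        for (String name : configValue.split(",")) {
            String trimmed = name.trim();
            if (!trimmed.isEmpty()) { // skip empty delegate class names
                classNames.add(trimmed);
            }
        }
        return classNames;
    }

    public static void main(String[] args) {
        System.out.println(parsePlugins(""));          // empty list, no warning
        System.out.println(parsePlugins("a.B, ,c.D")); // blank entry dropped
    }
}
```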
[GitHub] [hbase] infraio opened a new pull request #2199: HBASE-24819 Fix flaky test TestRaceBetweenSCPAndDTP and TestRaceBetwe…
infraio opened a new pull request #2199:
URL: https://github.com/apache/hbase/pull/2199

…enSCPAndTRSP for branch-2.2
[jira] [Updated] (HBASE-24819) Fix flaky test TestRaceBetweenSCPAndDTP and TestRaceBetweenSCPAndTRSP for branch-2.2
[ https://issues.apache.org/jira/browse/HBASE-24819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guanghao Zhang updated HBASE-24819:
---
Summary: Fix flaky test TestRaceBetweenSCPAndDTP and TestRaceBetweenSCPAndTRSP for branch-2.2 (was: Fix flaky test TestRaceBetweenSCPAndDTP for branch-2.2)

> Fix flaky test TestRaceBetweenSCPAndDTP and TestRaceBetweenSCPAndTRSP for branch-2.2
>
> Key: HBASE-24819
> URL: https://issues.apache.org/jira/browse/HBASE-24819
> Project: HBase
> Issue Type: Sub-task
> Reporter: Guanghao Zhang
> Assignee: Guanghao Zhang
> Priority: Major
>
> Backport HBASE-23805 and HBASE-24338
[jira] [Assigned] (HBASE-24819) Fix flaky test TestRaceBetweenSCPAndDTP for branch-2.2
[ https://issues.apache.org/jira/browse/HBASE-24819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guanghao Zhang reassigned HBASE-24819:
--
Assignee: Guanghao Zhang

> Fix flaky test TestRaceBetweenSCPAndDTP for branch-2.2
>
> Key: HBASE-24819
> URL: https://issues.apache.org/jira/browse/HBASE-24819
> Project: HBase
> Issue Type: Sub-task
> Reporter: Guanghao Zhang
> Assignee: Guanghao Zhang
> Priority: Major
>
> Backport HBASE-23805 and HBASE-24338
[jira] [Created] (HBASE-24819) Fix flaky test TestRaceBetweenSCPAndDTP for branch-2.2
Guanghao Zhang created HBASE-24819:
--
Summary: Fix flaky test TestRaceBetweenSCPAndDTP for branch-2.2
Key: HBASE-24819
URL: https://issues.apache.org/jira/browse/HBASE-24819
Project: HBase
Issue Type: Sub-task
Reporter: Guanghao Zhang

Backport HBASE-23805 and HBASE-24338
[jira] [Resolved] (HBASE-24818) Fix the precommit error for branch-1
[ https://issues.apache.org/jira/browse/HBASE-24818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guanghao Zhang resolved HBASE-24818.
Resolution: Duplicate

Sorry. Duplicate with HBASE-24816.

> Fix the precommit error for branch-1
>
> Key: HBASE-24818
> URL: https://issues.apache.org/jira/browse/HBASE-24818
> Project: HBase
> Issue Type: Sub-task
> Reporter: Guanghao Zhang
> Priority: Major
> Fix For: 1.7.0
[jira] [Created] (HBASE-24818) Fix the precommit error for branch-1
Guanghao Zhang created HBASE-24818:
--
Summary: Fix the precommit error for branch-1
Key: HBASE-24818
URL: https://issues.apache.org/jira/browse/HBASE-24818
Project: HBase
Issue Type: Sub-task
Reporter: Guanghao Zhang
[jira] [Updated] (HBASE-24818) Fix the precommit error for branch-1
[ https://issues.apache.org/jira/browse/HBASE-24818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guanghao Zhang updated HBASE-24818:
---
Fix Version/s: 1.7.0

> Fix the precommit error for branch-1
>
> Key: HBASE-24818
> URL: https://issues.apache.org/jira/browse/HBASE-24818
> Project: HBase
> Issue Type: Sub-task
> Reporter: Guanghao Zhang
> Priority: Major
> Fix For: 1.7.0
[GitHub] [hbase] joshelser commented on pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
joshelser commented on pull request #2193:
URL: https://github.com/apache/hbase/pull/2193#issuecomment-668872729

> TestWALEntryStream timed out for me too. Will dig in.

Ah, a mocking issue. Fixing.
[GitHub] [hbase] joshelser commented on pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
joshelser commented on pull request #2193:
URL: https://github.com/apache/hbase/pull/2193#issuecomment-668870260

Interesting. TestWALEntryStream timed out for me too. Will dig in.
[jira] [Commented] (HBASE-24812) Fix the precommit error for branch-2.2
[ https://issues.apache.org/jira/browse/HBASE-24812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171179#comment-17171179 ]

Hudson commented on HBASE-24812:

Results for branch branch-2.2 [build #925 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/925/]: (/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/925//General_Nightly_Build_Report/]
(/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/925//JDK8_Nightly_Build_Report_(Hadoop2)/]
(/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/925//JDK8_Nightly_Build_Report_(Hadoop3)/]
(/) {color:green}+1 source release artifact{color} -- See build output for details.
(/) {color:green}+1 client integration test{color}

> Fix the precommit error for branch-2.2
> --
>
> Key: HBASE-24812
> URL: https://issues.apache.org/jira/browse/HBASE-24812
> Project: HBase
> Issue Type: Bug
> Affects Versions: 1.7.0, 2.2.6
> Reporter: Guanghao Zhang
> Assignee: Guanghao Zhang
> Priority: Major
> Fix For: 2.2.6
>
> [https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/view/change-requests/job/PR-2187/2/console]
[GitHub] [hbase] Apache-HBase commented on pull request #2196: HBASE-24750 : All ExecutorService should use guava ThreadFactoryBuilder
Apache-HBase commented on pull request #2196:
URL: https://github.com/apache/hbase/pull/2196#issuecomment-668844676

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 1m 8s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 26s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 33s | master passed |
| +1 :green_heart: | compile | 4m 8s | master passed |
| +1 :green_heart: | shadedjars | 5m 38s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 3m 15s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 25s | the patch passed |
| +1 :green_heart: | compile | 4m 9s | the patch passed |
| +1 :green_heart: | javac | 4m 9s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 33s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 3m 18s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 16s | hbase-common in the patch passed. |
| +1 :green_heart: | unit | 1m 1s | hbase-client in the patch passed. |
| +1 :green_heart: | unit | 0m 42s | hbase-zookeeper in the patch passed. |
| +1 :green_heart: | unit | 1m 51s | hbase-procedure in the patch passed. |
| +1 :green_heart: | unit | 150m 11s | hbase-server in the patch passed. |
| +1 :green_heart: | unit | 4m 31s | hbase-thrift in the patch passed. |
| +1 :green_heart: | unit | 9m 50s | hbase-backup in the patch passed. |
| +1 :green_heart: | unit | 1m 10s | hbase-it in the patch passed. |
| +1 :green_heart: | unit | 2m 2s | hbase-examples in the patch passed. |
| | | 210m 59s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2196 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 0378fd41ce50 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d2f5a5f27b |
| Default Java | 1.8.0_232 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/testReport/ |
| Max. process+thread count | 4926 (vs. ulimit of 12500) |
| modules | C: hbase-common hbase-client hbase-zookeeper hbase-procedure hbase-server hbase-thrift hbase-backup hbase-it hbase-examples U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[GitHub] [hbase] Apache-HBase commented on pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
Apache-HBase commented on pull request #2193:
URL: https://github.com/apache/hbase/pull/2193#issuecomment-668841506

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 30s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 31s | master passed |
| +1 :green_heart: | compile | 1m 15s | master passed |
| +1 :green_heart: | shadedjars | 5m 35s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 51s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 33s | the patch passed |
| +1 :green_heart: | compile | 1m 19s | the patch passed |
| +1 :green_heart: | javac | 1m 19s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 40s | patch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 35s | hbase-server generated 1 new + 28 unchanged - 0 fixed = 29 total (was 28) |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 33s | hbase-hadoop-compat in the patch passed. |
| -1 :x: | unit | 145m 27s | hbase-server in the patch failed. |
| | | 171m 57s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2193 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 2c92a83078cb 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d2f5a5f27b |
| Default Java | 1.8.0_232 |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/3/artifact/yetus-jdk8-hadoop3-check/output/diff-javadoc-javadoc-hbase-server.txt |
| unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/3/testReport/ |
| Max. process+thread count | 4593 (vs. ulimit of 12500) |
| modules | C: hbase-hadoop-compat hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/3/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[GitHub] [hbase] Apache-HBase commented on pull request #2198: HBASE-24817 Allow configuring WALEntry filters on ReplicationSource
Apache-HBase commented on pull request #2198:
URL: https://github.com/apache/hbase/pull/2198#issuecomment-668840966

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 1m 42s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ branch-2 Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 57s | branch-2 passed |
| +1 :green_heart: | checkstyle | 1m 10s | branch-2 passed |
| +1 :green_heart: | spotbugs | 2m 7s | branch-2 passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 23s | the patch passed |
| -0 :warning: | checkstyle | 1m 7s | hbase-server: The patch generated 2 new + 1 unchanged - 15 fixed = 3 total (was 16) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 12m 9s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 2m 16s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 14s | The patch does not generate ASF License warnings. |
| | | 35m 30s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2198/1/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2198 |
| Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle |
| uname | Linux 27f06bc51dd0 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 8979202c7a |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2198/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt |
| Max. process+thread count | 94 (vs. ulimit of 12500) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2198/1/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[GitHub] [hbase] Apache-HBase commented on pull request #2196: HBASE-24750 : All ExecutorService should use guava ThreadFactoryBuilder
Apache-HBase commented on pull request #2196:
URL: https://github.com/apache/hbase/pull/2196#issuecomment-668838894

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 29s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
| | | | _Prechecks_ |
| | | | _master Compile Tests_ |
| +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 11s | master passed |
| +1 :green_heart: | compile | 4m 35s | master passed |
| +1 :green_heart: | shadedjars | 5m 44s | branch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 21s | hbase-backup in master failed. |
| -0 :warning: | javadoc | 0m 25s | hbase-client in master failed. |
| -0 :warning: | javadoc | 0m 17s | hbase-common in master failed. |
| -0 :warning: | javadoc | 0m 23s | hbase-examples in master failed. |
| -0 :warning: | javadoc | 0m 18s | hbase-procedure in master failed. |
| -0 :warning: | javadoc | 0m 39s | hbase-server in master failed. |
| -0 :warning: | javadoc | 0m 48s | hbase-thrift in master failed. |
| -0 :warning: | javadoc | 0m 17s | hbase-zookeeper in master failed. |
| | | | _Patch Compile Tests_ |
| +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 2s | the patch passed |
| +1 :green_heart: | compile | 4m 33s | the patch passed |
| +1 :green_heart: | javac | 4m 33s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 48s | patch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 17s | hbase-common in the patch failed. |
| -0 :warning: | javadoc | 0m 25s | hbase-client in the patch failed. |
| -0 :warning: | javadoc | 0m 17s | hbase-zookeeper in the patch failed. |
| -0 :warning: | javadoc | 0m 17s | hbase-procedure in the patch failed. |
| -0 :warning: | javadoc | 0m 39s | hbase-server in the patch failed. |
| -0 :warning: | javadoc | 0m 49s | hbase-thrift in the patch failed. |
| -0 :warning: | javadoc | 0m 20s | hbase-backup in the patch failed. |
| -0 :warning: | javadoc | 0m 22s | hbase-examples in the patch failed. |
| | | | _Other Tests_ |
| +1 :green_heart: | unit | 1m 30s | hbase-common in the patch passed. |
| +1 :green_heart: | unit | 1m 11s | hbase-client in the patch passed. |
| +1 :green_heart: | unit | 0m 43s | hbase-zookeeper in the patch passed. |
| +1 :green_heart: | unit | 1m 36s | hbase-procedure in the patch passed. |
| +1 :green_heart: | unit | 131m 6s | hbase-server in the patch passed. |
| +1 :green_heart: | unit | 4m 22s | hbase-thrift in the patch passed. |
| +1 :green_heart: | unit | 10m 8s | hbase-backup in the patch passed. |
| +1 :green_heart: | unit | 1m 7s | hbase-it in the patch passed. |
| +1 :green_heart: | unit | 1m 41s | hbase-examples in the patch passed. |
| | | 194m 38s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2196 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 9efd7ae56111 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d2f5a5f27b |
| Default Java | 2020-01-14 |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-backup.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-common.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-examples.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-procedure.txt |
[GitHub] [hbase] Apache-HBase commented on pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
Apache-HBase commented on pull request #2193:
URL: https://github.com/apache/hbase/pull/2193#issuecomment-668838364

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 31s | Docker mode activated. |
| -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
| | | | _Prechecks_ |
| | | | _master Compile Tests_ |
| +0 :ok: | mvndep | 0m 12s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 56s | master passed |
| +1 :green_heart: | compile | 1m 23s | master passed |
| +1 :green_heart: | shadedjars | 5m 43s | branch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 19s | hbase-hadoop-compat in master failed. |
| -0 :warning: | javadoc | 0m 38s | hbase-server in master failed. |
| | | | _Patch Compile Tests_ |
| +0 :ok: | mvndep | 0m 16s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 4s | the patch passed |
| +1 :green_heart: | compile | 1m 22s | the patch passed |
| +1 :green_heart: | javac | 1m 22s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 47s | patch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 19s | hbase-hadoop-compat in the patch failed. |
| -0 :warning: | javadoc | 0m 40s | hbase-server in the patch failed. |
| | | | _Other Tests_ |
| +1 :green_heart: | unit | 0m 33s | hbase-hadoop-compat in the patch passed. |
| -1 :x: | unit | 134m 41s | hbase-server in the patch failed. |
| | | 162m 51s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2193 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 87b66f02ce48 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d2f5a5f27b |
| Default Java | 2020-01-14 |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-hadoop-compat.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-hadoop-compat.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt |
| unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/3/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/3/testReport/ |
| Max. process+thread count | 4261 (vs. ulimit of 12500) |
| modules | C: hbase-hadoop-compat hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/3/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] tedyu commented on a change in pull request #2196: HBASE-24750 : All ExecutorService should use guava ThreadFactoryBuilder
tedyu commented on a change in pull request #2196:
URL: https://github.com/apache/hbase/pull/2196#discussion_r465341971

## File path: hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/regionserver/LogRollBackupSubprocedurePool.java
## @@ -62,10 +62,9 @@ public LogRollBackupSubprocedurePool(String name, Configuration conf) {
       LogRollRegionServerProcedureManager.BACKUP_TIMEOUT_MILLIS_DEFAULT);
     int threads = conf.getInt(CONCURENT_BACKUP_TASKS_KEY, DEFAULT_CONCURRENT_BACKUP_TASKS);
     this.name = name;
-    executor =
-        new ThreadPoolExecutor(1, threads, keepAlive, TimeUnit.SECONDS,
-            new LinkedBlockingQueue<>(),
-            Threads.newDaemonThreadFactory("rs(" + name + ")-backup"));
+    executor = new ThreadPoolExecutor(1, threads, keepAlive, TimeUnit.SECONDS,
+        new LinkedBlockingQueue<>(),
+        new ThreadFactoryBuilder().setNameFormat("rs(" + name + ")-backup-pool-%d").build());

Review comment: Should the thread names follow the existing format (dropping '-pool')? People may have got used to the current format during debugging.
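Ted's review is about the `%d`-style name format string the patch introduces. The sketch below mimics that naming behavior with JDK-only types so it runs without guava or HBase on the classpath; `NamedDaemonFactory` is an illustrative class name, not guava's `ThreadFactoryBuilder` or HBase's `Threads.newDaemonThreadFactory`.

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// JDK-only sketch of the thread-naming behavior under discussion.
// "NamedDaemonFactory" is a hypothetical name for illustration only.
public class NamedDaemonFactory implements ThreadFactory {
  private final String nameFormat; // e.g. "rs(backup1)-backup-pool-%d"
  private final AtomicInteger count = new AtomicInteger();

  public NamedDaemonFactory(String nameFormat) {
    this.nameFormat = nameFormat;
  }

  @Override
  public Thread newThread(Runnable r) {
    // %d is filled with a per-factory counter, giving rs(...)-backup-pool-0, -1, ...
    Thread t = new Thread(r, String.format(nameFormat, count.getAndIncrement()));
    t.setDaemon(true); // the replaced daemon thread factory also produced daemon threads
    return t;
  }

  public static void main(String[] args) {
    ThreadFactory withPool = new NamedDaemonFactory("rs(backup1)-backup-pool-%d");
    ThreadFactory withoutPool = new NamedDaemonFactory("rs(backup1)-backup-%d");
    System.out.println(withPool.newThread(() -> {}).getName());    // rs(backup1)-backup-pool-0
    System.out.println(withoutPool.newThread(() -> {}).getName()); // rs(backup1)-backup-0
  }
}
```

Ted's suggestion amounts to choosing the second format string, so thread names seen while debugging stay identical to the old ones.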
[jira] [Commented] (HBASE-18070) Enable memstore replication for meta replica
[ https://issues.apache.org/jira/browse/HBASE-18070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171136#comment-17171136 ]

Michael Stack commented on HBASE-18070:
---------------------------------------

Implementation notes by Huaxiang and myself.

> Enable memstore replication for meta replica
> --------------------------------------------
>
> Key: HBASE-18070
> URL: https://issues.apache.org/jira/browse/HBASE-18070
> Project: HBase
> Issue Type: New Feature
> Reporter: Hua Xiang
> Assignee: Huaxiang Sun
> Priority: Major
>
> Based on the current doc, memstore replication is not enabled for meta
> replica. Memstore replication will be a good improvement for meta replica.
> Create jira to track this effort (feasibility, design, implementation, etc).
[jira] [Commented] (HBASE-24749) Direct insert HFiles and Persist in-memory HFile tracking
[ https://issues.apache.org/jira/browse/HBASE-24749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171129#comment-17171129 ]

Zach York commented on HBASE-24749:
-----------------------------------

Yep, could do that as well. I was suggesting opening up only the one CF for writing outside of Master if it turns out having a single writer can't scale, but I think for now the single writer approach should work. Whether that writer is locked to Master or the RS hosting the CF, I don't have particularly strong feelings on. We would want to test how much overhead the additional hop would take (if Master is the writer, but meta is hosted on a different RS).

> Direct insert HFiles and Persist in-memory HFile tracking
> ---------------------------------------------------------
>
> Key: HBASE-24749
> URL: https://issues.apache.org/jira/browse/HBASE-24749
> Project: HBase
> Issue Type: Umbrella
> Components: Compaction, HFile
> Affects Versions: 3.0.0-alpha-1
> Reporter: Tak-Lon (Stephen) Wu
> Assignee: Tak-Lon (Stephen) Wu
> Priority: Major
> Labels: design, discussion, objectstore, storeFile, storeengine
> Attachments: 1B100m-25m25m-performance.pdf, Apache HBase - Direct
> insert HFiles and Persist in-memory HFile tracking.pdf
>
> We propose a new feature (a new store engine) to remove the {{.tmp}}
> directory used in the commit stage for common HFile operations such as flush
> and compaction to improve the write throughput and latency on object stores.
> Specifically for S3 filesystems, this will also mitigate read-after-write
> inconsistencies caused by immediate HFiles validation after moving the
> HFile(s) to data directory.
> Please see attached for this proposal and the initial result captured with
> 25m (25m operations) and 1B (100m operations) YCSB workload A LOAD and RUN,
> and workload C RUN result.
> The goal of this JIRA is to discuss with the community if the proposed
> improvement on the object stores use case makes senses and if we miss
> anything should be included.
> Improvement Highlights
> 1. Lower write latency, especially the p99+
> 2. Higher write throughput on flush and compaction
> 3. Lower MTTR on region (re)open or assignment
> 4. Remove consistent check dependencies (e.g. DynamoDB) supported by file
> system implementation
[jira] [Commented] (HBASE-24749) Direct insert HFiles and Persist in-memory HFile tracking
[ https://issues.apache.org/jira/browse/HBASE-24749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171121#comment-17171121 ]

Michael Stack commented on HBASE-24749:
---------------------------------------

One comment on the [~zyork] approach: if there were a dedicated hbase:meta CF for hfiles, there could still be one writer only; it would just be the RS hosting the Region, not the Master.
[jira] [Comment Edited] (HBASE-24749) Direct insert HFiles and Persist in-memory HFile tracking
[ https://issues.apache.org/jira/browse/HBASE-24749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171116#comment-17171116 ]

Tak-Lon (Stephen) Wu edited comment on HBASE-24749 at 8/4/20, 8:59 PM:
-----------------------------------------------------------------------

Sorry for the delay; I was out a few days last week.

{quote}every flush and compaction will result in an update inline w/ the flush/compaction completion – if it fails, the flush/compaction fail?{quote}

If updating the hfile set in {{hbase:meta}} fails, it should be treated as a failure when this feature is enabled. Do you have a concern about blocking the actual flush from completing? (It should be similar to other features like {{hbase:quota}}.)

{quote}Master would update, or RS writes meta, a violation of a simplification we made trying to ensure one-writer{quote}

Good note on ensuring one writer to {{hbase:meta}}; we haven't considered the one-writer scenario yet. I'm not sure of the right way, but since the flush happens on the RS side, either the RS opens a direct connection to {{hbase:meta}}, limited to writing only this column family outside of the Master (suggested by [~zyork], pending investigation), or, as you suggested, we package the hfile-set information into an RPC call to the Master and the Master updates the hfile set. The amount of traffic (direct table connection or RPC call) should be the same; I still need to compare whether the overhead (throughput) differs.

In addition, I will try to come up with a set of sub-tasks and update the proposal doc in the coming week. Please bear with me; the plan may have some transition tasks (the goal is to deliver in stages), e.g. 1) have the separate system table first, then follow-up tasks to 2) compare the migration into {{hbase:meta}} and 3) actually merge into {{hbase:meta}} (as a throughput sanity check).
[GitHub] [hbase] saintstack opened a new pull request #2198: HBASE-24817 Allow configuring WALEntry filters on ReplicationSource
saintstack opened a new pull request #2198:
URL: https://github.com/apache/hbase/pull/2198

Allow specifying the base WALEntry filter on construction of ReplicationSource. Add a means of being able to filter WALs by name.

hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
  Add a constructor that allows passing a predicate for filtering *in* WALs and a list of filters for filtering *out* WALEntries. The latter was hardcoded to filter out system-table WALEntries. The former did not exist, but we'll need it if Replication takes in more than just the default Provider.
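The PR description names two filter axes: a predicate that filters WALs *in* by name, and a list of filters that drop WALEntries *out*. A minimal sketch of that shape, using plain JDK types; the class and method names here are illustrative stand-ins, not HBase's actual ReplicationSource API:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Illustrative sketch only: the real ReplicationSource wiring differs.
public class ReplicationFilterSketch {
  // Filter *in*: keep only WAL files whose name the predicate accepts.
  static List<String> selectWals(List<String> walNames, Predicate<String> walFileFilter) {
    return walNames.stream().filter(walFileFilter).collect(Collectors.toList());
  }

  // Filter *out*: drop entries rejected by any of the entry filters.
  static List<String> filterEntries(List<String> entries, List<Predicate<String>> dropFilters) {
    return entries.stream()
        .filter(e -> dropFilters.stream().noneMatch(f -> f.test(e)))
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    // Hypothetical names: replicate only meta WALs, and drop system-table edits.
    Predicate<String> metaWalsOnly = name -> name.contains(".meta.");
    List<String> wals = selectWals(
        List.of("rs1.1596570000000", "rs1.meta.1596570000001"), metaWalsOnly);
    List<String> kept = filterEntries(
        List.of("hbase:namespace/e1", "usertable/e2"),
        List.of(e -> e.startsWith("hbase:")));
    System.out.println(wals + " " + kept);
  }
}
```

The hardcoded behavior the PR replaces corresponds to always passing the system-table drop filter; the new constructor lets callers swap in a different base set, e.g. one that admits hbase:meta WALs.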
[jira] [Created] (HBASE-24817) Allow configuring WALEntry filters on ReplicationSource
Michael Stack created HBASE-24817:
----------------------------------

Summary: Allow configuring WALEntry filters on ReplicationSource
Key: HBASE-24817
URL: https://issues.apache.org/jira/browse/HBASE-24817
Project: HBase
Issue Type: Sub-task
Components: Replication, wal
Affects Versions: 3.0.0-alpha-1, 2.4.0
Reporter: Michael Stack

The parent issue is about enabling memstore replication of meta Regions. As-is, the ReplicationSource is hardcoded to filter out hbase:meta WALEntries; they are not forwarded for Replication. This issue is all internals, making it possible to create an instance of ReplicationSource with a different base set of WALEntry filters. We also add a means of filtering WALs by path name.
[GitHub] [hbase] bharathv commented on a change in pull request #2197: HBASE-24807 Backport HBASE-20417 to branch-1
bharathv commented on a change in pull request #2197:
URL: https://github.com/apache/hbase/pull/2197#discussion_r465296034

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReaderThread.java
## @@ -132,6 +135,10 @@ public void run() {
   try (WALEntryStream entryStream =
       new WALEntryStream(logQueue, fs, conf, lastReadPosition, metrics)) {
     while (isReaderRunning()) { // loop here to keep reusing stream while we can
+      if (!source.isPeerEnabled()) {

Review comment: This is just a safeguard to prevent accumulation of batches, right? No other implications of the patch that I can think of.
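The reading in the review comment is that the new `isPeerEnabled()` check only keeps WAL-entry batches from piling up while a replication peer is disabled. A toy model of that guard, with illustrative names that do not match the real reader thread:

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

// Hypothetical model of the guard discussed in the review; not HBase code.
public class PeerGuardSketch {
  interface Source { boolean isPeerEnabled(); }

  // Drain WAL entries into batches, but only while the peer is enabled,
  // so a disabled peer never accumulates batches in memory.
  static int readBatches(Source source, Queue<String> wal, Queue<String> batches, int maxLoops) {
    int loops = 0;
    while (loops++ < maxLoops && !wal.isEmpty()) {
      if (!source.isPeerEnabled()) {
        continue; // the real code would sleep/back off here instead of spinning
      }
      batches.add(wal.poll());
    }
    return batches.size();
  }

  public static void main(String[] args) {
    Queue<String> wal = new ArrayDeque<>(List.of("e1", "e2", "e3"));
    System.out.println(readBatches(() -> false, wal, new ArrayDeque<>(), 10)); // 0
    System.out.println(readBatches(() -> true, wal, new ArrayDeque<>(), 10));  // 3
  }
}
```

With the peer disabled nothing is batched, which matches the "safeguard against accumulation" interpretation; correctness is unchanged because the entries stay queued in the WAL stream until the peer is re-enabled.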
[jira] [Commented] (HBASE-24672) HBase Shell Commands Survey
[ https://issues.apache.org/jira/browse/HBASE-24672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17171093#comment-17171093 ]

Elliot Miller commented on HBASE-24672:
---------------------------------------

I added one more column ("uses formatter") to the spreadsheet today to indicate which commands are using our custom formatter ({{::Shell::Formatter}}). I'm hoping to unify how all the commands format their output.

> HBase Shell Commands Survey
> ---------------------------
>
> Key: HBASE-24672
> Project: HBase
> Issue Type: Task
> Affects Versions: 3.0.0-alpha-1, 2.3.0
> Reporter: Elliot Miller
> Assignee: Elliot Miller
> Priority: Minor
>
> I am going through all 163 commands in the hbase-shell module and checking a
> few things:
> * Functions as advertised
> * Consistent naming, formatting, and help
> * Return values
> ** The majority of the commands still return nil. We can make the shell more
> powerful by switching some of these commands to return Ruby objects.
> h3. Acceptance Criteria
> * The product of this ticket will be a *spreadsheet* with my comments on
> each command and potentially a design doc or other Jira tickets with the
> necessary changes found by the survey.
[GitHub] [hbase] Apache-HBase commented on pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
Apache-HBase commented on pull request #2193:
URL: https://github.com/apache/hbase/pull/2193#issuecomment-668783695

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 28s | Docker mode activated. |
| | | | _Prechecks_ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| | | | _master Compile Tests_ |
| +0 :ok: | mvndep | 0m 22s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 43s | master passed |
| +1 :green_heart: | checkstyle | 1m 20s | master passed |
| +1 :green_heart: | spotbugs | 2m 45s | master passed |
| | | | _Patch Compile Tests_ |
| +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 19s | the patch passed |
| -0 :warning: | checkstyle | 1m 2s | hbase-server: The patch generated 1 new + 7 unchanged - 0 fixed = 8 total (was 7) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 11m 17s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 2m 48s | the patch passed |
| | | | _Other Tests_ |
| +1 :green_heart: | asflicense | 0m 25s | The patch does not generate ASF License warnings. |
| | | 35m 37s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/3/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2193 |
| Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle |
| uname | Linux 88ef5b9d1868 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d2f5a5f27b |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt |
| Max. process+thread count | 94 (vs. ulimit of 12500) |
| modules | C: hbase-hadoop-compat hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/3/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[GitHub] [hbase] Apache-HBase commented on pull request #2196: HBASE-24750 : All ExecutorService should use guava ThreadFactoryBuilder
Apache-HBase commented on pull request #2196: URL: https://github.com/apache/hbase/pull/2196#issuecomment-668782334

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 1m 21s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 24s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 2s | master passed |
| +1 :green_heart: | checkstyle | 4m 19s | master passed |
| +1 :green_heart: | spotbugs | 9m 22s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 54s | the patch passed |
| -0 :warning: | checkstyle | 0m 24s | hbase-common: The patch generated 1 new + 6 unchanged - 0 fixed = 7 total (was 6) |
| -0 :warning: | checkstyle | 0m 30s | hbase-client: The patch generated 3 new + 1 unchanged - 0 fixed = 4 total (was 1) |
| -0 :warning: | checkstyle | 0m 15s | hbase-zookeeper: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| -0 :warning: | checkstyle | 0m 19s | hbase-procedure: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| -0 :warning: | checkstyle | 1m 20s | hbase-server: The patch generated 24 new + 220 unchanged - 1 fixed = 244 total (was 221) |
| -0 :warning: | checkstyle | 0m 49s | hbase-thrift: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| -0 :warning: | checkstyle | 0m 16s | hbase-backup: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| -0 :warning: | checkstyle | 0m 16s | hbase-examples: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 13m 4s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 9m 45s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 1m 31s | The patch does not generate ASF License warnings. |
| | | 62m 43s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2196 |
| Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle |
| uname | Linux e6d82255df19 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d2f5a5f27b |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-common.txt |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-client.txt |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-zookeeper.txt |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-procedure.txt |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-thrift.txt |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-backup.txt |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-examples.txt |
| Max. process+thread count | 84 (vs. ulimit of 12500) |
| modules | C: hbase-common hbase-client hbase-zookeeper hbase-procedure hbase-server hbase-thrift hbase-backup hbase-it hbase-examples U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2196
[GitHub] [hbase] Apache-HBase commented on pull request #2130: HBASE-24765: Dynamic master discovery
Apache-HBase commented on pull request #2130: URL: https://github.com/apache/hbase/pull/2130#issuecomment-668778886

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 34s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 24s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 42s | master passed |
| +1 :green_heart: | compile | 2m 5s | master passed |
| +1 :green_heart: | shadedjars | 5m 39s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 13s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 31s | the patch passed |
| +1 :green_heart: | compile | 2m 5s | the patch passed |
| +1 :green_heart: | javac | 2m 5s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 32s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 1m 11s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 45s | hbase-protocol-shaded in the patch passed. |
| +1 :green_heart: | unit | 1m 3s | hbase-client in the patch passed. |
| -1 :x: | unit | 159m 38s | hbase-server in the patch failed. |
| | | 190m 16s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2130/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2130 |
| JIRA Issue | HBASE-24765 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux a850f338ec67 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d2f5a5f27b |
| Default Java | 1.8.0_232 |
| unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2130/3/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2130/3/testReport/ |
| Max. process+thread count | 3679 (vs. ulimit of 12500) |
| modules | C: hbase-protocol-shaded hbase-client hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2130/3/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (HBASE-24816) Remove unused credential hbaseqa-at-asf-jira
[ https://issues.apache.org/jira/browse/HBASE-24816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharath Vissapragada updated HBASE-24816:
-----------------------------------------
Summary: Remove unused credential hbaseqa-at-asf-jira (was: Fix pre-commit on branch-1)

> Remove unused credential hbaseqa-at-asf-jira
> --------------------------------------------
>
> Key: HBASE-24816
> URL: https://issues.apache.org/jira/browse/HBASE-24816
> Project: HBase
> Issue Type: Bug
> Components: build, tooling
> Affects Versions: 1.7.0
> Reporter: Bharath Vissapragada
> Assignee: Bharath Vissapragada
> Priority: Major
> Fix For: 1.7.0
>
> After move to ci-hadoop, branch-1 precommits are unable to checkout the source tree.
> https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/view/change-requests/job/PR-2194/1/console
> {noformat}
> 04:55:42 Running in /home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-2194/yetus
> [Pipeline] {
> [Pipeline] checkout
> 04:55:42 No credentials specified
> 04:55:42 Cloning the remote Git repository
>  > git rev-parse HEAD^{commit} # timeout=10
>  > git config core.sparsecheckout # timeout=10
>  > git checkout -f f9a427c6aff4f01bf06aa458a935249cfd6a5d30 # timeout=10
> 04:55:41 Cloning repository https://github.com/apache/yetus.git
> 04:55:41 > git init /home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-2194/yetus # timeout=10
> 04:55:41 Fetching upstream changes from https://github.com/apache/yetus.git
> 04:55:41 > git --version # timeout=10
> 04:55:41 > git fetch --tags --progress -- https://github.com/apache/yetus.git +refs/heads/*:refs/remotes/origin/* # timeout=10
> 04:55:44 Checking out Revision 11add70671de39cd96b56e86e40c64c872b9282f (rel/0.11.1)
> 04:55:42 > git config remote.origin.url https://github.com/apache/yetus.git # timeout=10
> 04:55:42 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
> 04:55:42 > git config remote.origin.url https://github.com/apache/yetus.git # timeout=10
> 04:55:42 Fetching upstream changes from https://github.com/apache/yetus.git
> 04:55:42 > git fetch --tags --progress -- https://github.com/apache/yetus.git +refs/heads/*:refs/remotes/origin/* # timeout=10
> 04:55:43 > git rev-parse rel/0.11.1^{commit} # timeout=10
> 04:55:43 > git rev-parse refs/remotes/origin/rel/0.11.1^{commit} # timeout=10
> 04:55:44 Commit message: "YETUS-920. Stage version 0.11.1."
> 04:55:44 First time build. Skipping changelog.
> [Pipeline] }
> [Pipeline] // dir
> [Pipeline] }
> [Pipeline] // stage
> [Pipeline] stage
> [Pipeline] { (precommit-run)
> [Pipeline] withCredentials
> [Pipeline] // withCredentials
> [Pipeline] }
> [Pipeline] // stage
> [Pipeline] stage
> [Pipeline] { (Declarative: Post Actions)
> [Pipeline] script
> [Pipeline] {
> [Pipeline] step
> 04:55:45 Archiving artifacts
> [Pipeline] publishHTML
> 04:55:45 [htmlpublisher] Archiving HTML reports...
> {noformat}

--
This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-24816) Fix pre-commit on branch-1
[ https://issues.apache.org/jira/browse/HBASE-24816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharath Vissapragada resolved HBASE-24816.
------------------------------------------
Fix Version/s: 1.7.0
Resolution: Fixed
[GitHub] [hbase] bharathv merged pull request #2195: HBASE-24816: Remove unused credential hbaseqa-at-asf-jira
bharathv merged pull request #2195: URL: https://github.com/apache/hbase/pull/2195
[GitHub] [hbase] Apache-HBase commented on pull request #2130: HBASE-24765: Dynamic master discovery
Apache-HBase commented on pull request #2130: URL: https://github.com/apache/hbase/pull/2130#issuecomment-668775483

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 34s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 24s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 22s | master passed |
| +1 :green_heart: | compile | 2m 39s | master passed |
| +1 :green_heart: | shadedjars | 5m 54s | branch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 28s | hbase-client in master failed. |
| -0 :warning: | javadoc | 0m 41s | hbase-server in master failed. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 4m 12s | the patch passed |
| +1 :green_heart: | compile | 2m 34s | the patch passed |
| +1 :green_heart: | javac | 2m 34s | the patch passed |
| +1 :green_heart: | shadedjars | 6m 11s | patch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 26s | hbase-client in the patch failed. |
| -0 :warning: | javadoc | 0m 50s | hbase-server in the patch failed. |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 19s | hbase-protocol-shaded in the patch passed. |
| +1 :green_heart: | unit | 1m 20s | hbase-client in the patch passed. |
| -1 :x: | unit | 147m 34s | hbase-server in the patch failed. |
| | | 182m 32s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2130/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2130 |
| JIRA Issue | HBASE-24765 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 2a198205c2e8 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d2f5a5f27b |
| Default Java | 2020-01-14 |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2130/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2130/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2130/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2130/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt |
| unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2130/3/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2130/3/testReport/ |
| Max. process+thread count | 3797 (vs. ulimit of 12500) |
| modules | C: hbase-protocol-shaded hbase-client hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2130/3/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] Apache-HBase commented on pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
Apache-HBase commented on pull request #2193: URL: https://github.com/apache/hbase/pull/2193#issuecomment-668766824

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 34s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 24s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 49s | master passed |
| +1 :green_heart: | compile | 1m 16s | master passed |
| +1 :green_heart: | shadedjars | 5m 54s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 55s | master passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 26s | the patch passed |
| +1 :green_heart: | compile | 1m 13s | the patch passed |
| +1 :green_heart: | javac | 1m 13s | the patch passed |
| -1 :x: | shadedjars | 2m 38s | patch has 10 errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 37s | hbase-server generated 1 new + 28 unchanged - 0 fixed = 29 total (was 28) |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 34s | hbase-hadoop-compat in the patch passed. |
| -1 :x: | unit | 153m 49s | hbase-server in the patch failed. |
| | | 178m 5s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2193 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux cb42875b271d 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d2f5a5f27b |
| Default Java | 1.8.0_232 |
| shadedjars | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/artifact/yetus-jdk8-hadoop3-check/output/patch-shadedjars.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/artifact/yetus-jdk8-hadoop3-check/output/diff-javadoc-javadoc-hbase-server.txt |
| unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/testReport/ |
| Max. process+thread count | 4770 (vs. ulimit of 12500) |
| modules | C: hbase-hadoop-compat hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] wchevreuil opened a new pull request #2197: HBASE-24807 Backport HBASE-20417 to branch-1
wchevreuil opened a new pull request #2197: URL: https://github.com/apache/hbase/pull/2197
[GitHub] [hbase] Apache-HBase commented on pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
Apache-HBase commented on pull request #2193: URL: https://github.com/apache/hbase/pull/2193#issuecomment-668759278

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 31s | Docker mode activated. |
| -0 :warning: | yetus | 0m 4s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 4m 6s | master passed |
| +1 :green_heart: | compile | 1m 24s | master passed |
| +1 :green_heart: | shadedjars | 5m 48s | branch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 20s | hbase-hadoop-compat in master failed. |
| -0 :warning: | javadoc | 0m 40s | hbase-server in master failed. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 58s | the patch passed |
| +1 :green_heart: | compile | 1m 23s | the patch passed |
| +1 :green_heart: | javac | 1m 23s | the patch passed |
| -1 :x: | shadedjars | 2m 32s | patch has 10 errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 19s | hbase-hadoop-compat in the patch failed. |
| -0 :warning: | javadoc | 0m 41s | hbase-server in the patch failed. |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 34s | hbase-hadoop-compat in the patch passed. |
| -1 :x: | unit | 136m 34s | hbase-server in the patch failed. |
| | | 161m 43s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2193 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux ad535a321ba1 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d2f5a5f27b |
| Default Java | 2020-01-14 |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-hadoop-compat.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt |
| shadedjars | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/artifact/yetus-jdk11-hadoop3-check/output/patch-shadedjars.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-hadoop-compat.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt |
| unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/testReport/ |
| Max. process+thread count | 4144 (vs. ulimit of 12500) |
| modules | C: hbase-hadoop-compat hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[jira] [Work started] (HBASE-24750) All executor service should start using guava ThreadFactory
[ https://issues.apache.org/jira/browse/HBASE-24750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on HBASE-24750 started by Viraj Jasani.

> All executor service should start using guava ThreadFactory
> ------------------------------------------------------------
>
> Key: HBASE-24750
> URL: https://issues.apache.org/jira/browse/HBASE-24750
> Project: HBase
> Issue Type: Improvement
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
>
> Currently, we have majority Executor services using guava's ThreadFactoryBuilder while creating fixed size thread pool. There are some executors using our internal hbase-common's Threads class which provides util methods for creating thread factory.
> Although there is no perf impact, we should let all Executors start using our internal library for using ThreadFactory rather than having external guava dependency (which is nothing more than a builder class). We might have to add a couple more arguments to support full fledged ThreadFactory, but let's do it and stop using guava's builder class.
> *Update:*
> Based on the consensus, we should use only guava library and retire our internal code which maintains ThreadFactory creation.
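The issue description above is about standardizing all executors on guava's `ThreadFactoryBuilder` when creating fixed-size thread pools. As a rough, self-contained sketch of what that builder provides (a thread-name format and a daemon flag, the two settings executors typically configure), here is an equivalent written against plain JDK APIs. The class `NamedThreadFactory`, the demo class, and the pool name `hbase-demo-pool-%d` are illustrative, not actual HBase names:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Approximation of the ThreadFactory that guava's
// ThreadFactoryBuilder.build() returns when setNameFormat(...) and
// setDaemon(...) are configured. Illustrative only.
class NamedThreadFactory implements ThreadFactory {
  private final String nameFormat; // e.g. "hbase-demo-pool-%d"
  private final boolean daemon;
  private final AtomicInteger count = new AtomicInteger();

  NamedThreadFactory(String nameFormat, boolean daemon) {
    this.nameFormat = nameFormat;
    this.daemon = daemon;
  }

  @Override
  public Thread newThread(Runnable r) {
    // Each thread gets a sequential, human-readable name.
    Thread t = new Thread(r, String.format(nameFormat, count.getAndIncrement()));
    t.setDaemon(daemon);
    return t;
  }
}

public class ThreadFactoryDemo {
  public static void main(String[] args) throws Exception {
    // With guava this would read:
    //   new ThreadFactoryBuilder().setNameFormat("hbase-demo-pool-%d").setDaemon(true).build()
    ThreadFactory factory = new NamedThreadFactory("hbase-demo-pool-%d", true);
    ExecutorService pool = Executors.newFixedThreadPool(2, factory);
    Future<String> first = pool.submit(() -> Thread.currentThread().getName());
    System.out.println(first.get()); // e.g. "hbase-demo-pool-0"
    pool.shutdown();
  }
}
```

Named threads make thread dumps of a busy process much easier to attribute, which is the practical payoff of funneling every executor through one factory style.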
[GitHub] [hbase] virajjasani opened a new pull request #2196: HBASE-24750 : All ExecutorService should use guava ThreadFactoryBuilder
virajjasani opened a new pull request #2196: URL: https://github.com/apache/hbase/pull/2196
[jira] [Updated] (HBASE-24750) All executor service should start using guava ThreadFactory
[ https://issues.apache.org/jira/browse/HBASE-24750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viraj Jasani updated HBASE-24750:
---------------------------------
Description:
Currently, we have majority Executor services using guava's ThreadFactoryBuilder while creating fixed size thread pool. There are some executors using our internal hbase-common's Threads class which provides util methods for creating thread factory.

Although there is no perf impact, we should let all Executors start using our internal library for using ThreadFactory rather than having external guava dependency (which is nothing more than a builder class). We might have to add a couple more arguments to support full fledged ThreadFactory, but let's do it and stop using guava's builder class.

Update: Based on the consensus, we should use only guava library and retire our internal code which maintains ThreadFactory creation.

was: the same description without the closing "Update:" paragraph.
[jira] [Updated] (HBASE-24750) All executor service should start using guava ThreadFactory
[ https://issues.apache.org/jira/browse/HBASE-24750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viraj Jasani updated HBASE-24750:
---------------------------------
Description: the same text with the closing label bolded as *Update:* (was: plain "Update:").
[jira] [Updated] (HBASE-24750) All executor service should start using guava ThreadFactory
[ https://issues.apache.org/jira/browse/HBASE-24750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viraj Jasani updated HBASE-24750:
---------------------------------
Summary: All executor service should start using guava ThreadFactory (was: All executor service should start using our internal ThreadFactory)
[jira] [Assigned] (HBASE-24750) All executor service should start using our internal ThreadFactory
[ https://issues.apache.org/jira/browse/HBASE-24750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viraj Jasani reassigned HBASE-24750:
------------------------------------
    Assignee: Viraj Jasani

> All executor service should start using our internal ThreadFactory
> ------------------------------------------------------------------
>
>                 Key: HBASE-24750
>                 URL: https://issues.apache.org/jira/browse/HBASE-24750
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
[GitHub] [hbase] joshelser commented on pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
joshelser commented on pull request #2193: URL: https://github.com/apache/hbase/pull/2193#issuecomment-668731763 Thanks folks! Just pushed a couple more commits for cleanup on QA and wellington's suggestions. I'll merge when QA is happy. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] joshelser commented on a change in pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
joshelser commented on a change in pull request #2193: URL: https://github.com/apache/hbase/pull/2193#discussion_r465218113

## File path: hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSourceSourceImpl.java

@@ -314,4 +314,15 @@ public String getMetricsName() {
   @Override
   public long getEditsFiltered() { return this.walEditsFilteredCounter.value(); }
+
+  @Override
+  public void setWALReaderEditsBufferBytes(long usage) {
+    //noop. Global limit, tracked globally. Do not need per-source metrics

Review comment: yup! Just made another interface to isolate the new additions, which cleans this up a little. I think your suggestion is still good for better, fine-grained tracking. However, since HBASE-20417, hopefully no one else runs into this ;)
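The design described above — the WAL-reader buffer metric lives only on the global source, while per-source implementations deliberately no-op the setter — can be sketched roughly as below. The interface and class names here are illustrative, and the plain `long` field stands in for Hadoop metrics2's `MutableGaugeLong`.

```java
// Sketch of the interface split discussed in this thread: buffer-usage
// metrics are region-server-wide, so only the global source stores them.
interface WALEditsBufferMetrics {
  void setWALReaderEditsBufferBytes(long usage);
  long getWALReaderEditsBufferBytes();
}

class GlobalSource implements WALEditsBufferMetrics {
  private long walReaderBufferGauge;  // stand-in for a MutableGaugeLong

  @Override
  public void setWALReaderEditsBufferBytes(long usage) {
    walReaderBufferGauge = usage;     // one RS-wide value, tracked globally
  }

  @Override
  public long getWALReaderEditsBufferBytes() {
    return walReaderBufferGauge;
  }
}

class PerSource implements WALEditsBufferMetrics {
  @Override
  public void setWALReaderEditsBufferBytes(long usage) {
    // noop: the limit is global, so per-source tracking adds nothing
  }

  @Override
  public long getWALReaderEditsBufferBytes() {
    return 0L;
  }
}
```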
[GitHub] [hbase] bharathv commented on a change in pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
bharathv commented on a change in pull request #2193: URL: https://github.com/apache/hbase/pull/2193#discussion_r465216898

## File path: hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSourceSourceImpl.java

@@ -314,4 +314,15 @@ public String getMetricsName() {
   @Override
   public long getEditsFiltered() { return this.walEditsFilteredCounter.value(); }
+
+  @Override
+  public void setWALReaderEditsBufferBytes(long usage) {
+    //noop. Global limit, tracked globally. Do not need per-source metrics

Review comment: Oh, I see. Looks like you pushed the metrics update logic into the source; the patch looks clean now.
[GitHub] [hbase] joshelser commented on a change in pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
joshelser commented on a change in pull request #2193: URL: https://github.com/apache/hbase/pull/2193#discussion_r465212657

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReader.java

@@ -276,6 +274,8 @@ public Path getCurrentPath() {
   private boolean checkQuota() {
     // try not to go over total quota
     if (totalBufferUsed.get() > totalBufferQuota) {
+      LOG.warn("Can't read more edits from WAL as buffer usage {}B exceeds limit {}B",

Review comment: Sure, that's easy to add.
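The quota gate in the diff above can be sketched in a self-contained form as below. Field names follow the snippet (`totalBufferUsed`, `totalBufferQuota`); the logger call is modeled with `printf` here, whereas the real code uses slf4j's `LOG.warn`.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of ReplicationSourceWALReader#checkQuota(): stop reading WAL edits
// once the shared buffer usage crosses the configured quota, and log why.
class WalReaderQuotaSketch {
  private final AtomicLong totalBufferUsed = new AtomicLong();
  private final long totalBufferQuota;

  WalReaderQuotaSketch(long totalBufferQuota) {
    this.totalBufferQuota = totalBufferQuota;
  }

  /** @return true if there is still room to read more edits. */
  boolean checkQuota() {
    long used = totalBufferUsed.get();
    if (used > totalBufferQuota) {
      // Mirrors the LOG.warn added in the diff above.
      System.out.printf("Can't read more edits from WAL as buffer usage %dB exceeds limit %dB%n",
          used, totalBufferQuota);
      return false;  // caller backs off until shipped edits free buffer space
    }
    return true;
  }

  long addToBuffer(long bytes) {
    return totalBufferUsed.addAndGet(bytes);
  }
}
```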
[GitHub] [hbase] Apache-HBase commented on pull request #2195: HBASE-24816: Remove unused credential hbaseqa-at-asf-jira
Apache-HBase commented on pull request #2195: URL: https://github.com/apache/hbase/pull/2195#issuecomment-668723359

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 6m 37s | Docker mode activated. |
| | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +0 :ok: | shelldocs | 0m 0s | Shelldocs was not available. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| | | | _ branch-1 Compile Tests _ |
| +0 :ok: | mvndep | 2m 25s | Maven dependency ordering for branch |
| | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 9s | Maven dependency ordering for patch |
| +1 :green_heart: | shellcheck | 0m 0s | There were no new shellcheck issues. |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| | | | _ Other Tests _ |
| +0 :ok: | asflicense | 0m 0s | ASF License check generated no output? |
| | | 10m 21s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2195/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2195 |
| JIRA Issue | HBASE-24816 |
| Optional Tests | dupname asflicense shellcheck shelldocs |
| uname | Linux 3ad2cbccf419 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-2195/out/precommit/personality/provided.sh |
| git revision | branch-1 / af18670 |
| Max. process+thread count | 44 (vs. ulimit of 1) |
| modules | C: U: |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2195/1/console |
| versions | git=1.9.1 maven=3.0.5 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] joshelser commented on a change in pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
joshelser commented on a change in pull request #2193: URL: https://github.com/apache/hbase/pull/2193#discussion_r465204713 ## File path: hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java ## @@ -1,253 +1,19 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - package org.apache.hadoop.hbase.replication.regionserver; -import org.apache.hadoop.metrics2.lib.MutableFastCounter; -import org.apache.hadoop.metrics2.lib.MutableGaugeLong; -import org.apache.hadoop.metrics2.lib.MutableHistogram; import org.apache.yetus.audience.InterfaceAudience; @InterfaceAudience.Private -public class MetricsReplicationGlobalSourceSource implements MetricsReplicationSourceSource{ - private static final String KEY_PREFIX = "source."; - - private final MetricsReplicationSourceImpl rms; - - private final MutableHistogram ageOfLastShippedOpHist; - private final MutableGaugeLong sizeOfLogQueueGauge; - private final MutableFastCounter logReadInEditsCounter; - private final MutableFastCounter walEditsFilteredCounter; - private final MutableFastCounter shippedBatchesCounter; - private final MutableFastCounter shippedOpsCounter; - private final MutableFastCounter shippedBytesCounter; - private final MutableFastCounter logReadInBytesCounter; - private final MutableFastCounter shippedHFilesCounter; - private final MutableGaugeLong sizeOfHFileRefsQueueGauge; - private final MutableFastCounter unknownFileLengthForClosedWAL; - private final MutableFastCounter uncleanlyClosedWAL; - private final MutableFastCounter uncleanlyClosedSkippedBytes; - private final MutableFastCounter restartWALReading; - private final MutableFastCounter repeatedFileBytes; - private final MutableFastCounter completedWAL; - private final MutableFastCounter completedRecoveryQueue; - private final MutableFastCounter failedRecoveryQueue; - - public MetricsReplicationGlobalSourceSource(MetricsReplicationSourceImpl rms) { -this.rms = rms; - -ageOfLastShippedOpHist = rms.getMetricsRegistry().getHistogram(SOURCE_AGE_OF_LAST_SHIPPED_OP); - -sizeOfLogQueueGauge = rms.getMetricsRegistry().getGauge(SOURCE_SIZE_OF_LOG_QUEUE, 0L); - -shippedBatchesCounter = rms.getMetricsRegistry().getCounter(SOURCE_SHIPPED_BATCHES, 0L); - -shippedOpsCounter = 
rms.getMetricsRegistry().getCounter(SOURCE_SHIPPED_OPS, 0L); - -shippedBytesCounter = rms.getMetricsRegistry().getCounter(SOURCE_SHIPPED_BYTES, 0L); - -logReadInBytesCounter = rms.getMetricsRegistry().getCounter(SOURCE_LOG_READ_IN_BYTES, 0L); - -logReadInEditsCounter = rms.getMetricsRegistry().getCounter(SOURCE_LOG_READ_IN_EDITS, 0L); - -walEditsFilteredCounter = rms.getMetricsRegistry().getCounter(SOURCE_LOG_EDITS_FILTERED, 0L); - -shippedHFilesCounter = rms.getMetricsRegistry().getCounter(SOURCE_SHIPPED_HFILES, 0L); - -sizeOfHFileRefsQueueGauge = -rms.getMetricsRegistry().getGauge(SOURCE_SIZE_OF_HFILE_REFS_QUEUE, 0L); - -unknownFileLengthForClosedWAL = rms.getMetricsRegistry() -.getCounter(SOURCE_CLOSED_LOGS_WITH_UNKNOWN_LENGTH, 0L); -uncleanlyClosedWAL = rms.getMetricsRegistry().getCounter(SOURCE_UNCLEANLY_CLOSED_LOGS, 0L); -uncleanlyClosedSkippedBytes = rms.getMetricsRegistry() -.getCounter(SOURCE_UNCLEANLY_CLOSED_IGNORED_IN_BYTES, 0L); -restartWALReading = rms.getMetricsRegistry().getCounter(SOURCE_RESTARTED_LOG_READING, 0L); -repeatedFileBytes = rms.getMetricsRegistry().getCounter(SOURCE_REPEATED_LOG_FILE_BYTES, 0L); -completedWAL = rms.getMetricsRegistry().getCounter(SOURCE_COMPLETED_LOGS, 0L); -completedRecoveryQueue = rms.getMetricsRegistry() -.getCounter(SOURCE_COMPLETED_RECOVERY_QUEUES, 0L); -failedRecoveryQueue = rms.getMetricsRegistry() -.getCounter(SOURCE_FAILED_RECOVERY_QUEUES, 0L); - } - - @Override public void setLastShippedAge(long age) { -ageOfLastShippedOpHist.add(age); - } - - @Override public void incrSizeOfLogQueue(int size) { -sizeOfLogQueueGauge.incr(size); - } - - @Override public void decrSizeOfLogQueue(int size) { -sizeOfLogQueueGauge.decr(size); - } - - @Override publi
[GitHub] [hbase] taklwu commented on a change in pull request #2113: HBASE-24286: HMaster won't become healthy after after cloning or crea…
taklwu commented on a change in pull request #2113: URL: https://github.com/apache/hbase/pull/2113#discussion_r465204694

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/InitMetaProcedure.java

@@ -71,7 +71,11 @@ private static void writeFsLayout(Path rootDir, Configuration conf) throws IOExc
     LOG.info("BOOTSTRAP: creating hbase:meta region");
     FileSystem fs = rootDir.getFileSystem(conf);
     Path tableDir = CommonFSUtils.getTableDir(rootDir, TableName.META_TABLE_NAME);
-    if (fs.exists(tableDir) && !fs.delete(tableDir, true)) {
+    boolean removeMeta = conf.getBoolean(HConstants.REMOVE_META_ON_RESTART,

Review comment: Sounds right to me; as you suggested, we'll put this PR on hold since it depends on the new sub-task. I will try to send another JIRA and PR out in a few days and refer to the conversation we discussed here. Thanks again, Anoop.
[GitHub] [hbase] bharathv opened a new pull request #2195: HBASE-24816: Remove unused credential hbaseqa-at-asf-jira
bharathv opened a new pull request #2195: URL: https://github.com/apache/hbase/pull/2195
[GitHub] [hbase] Apache-HBase commented on pull request #2130: HBASE-24765: Dynamic master discovery
Apache-HBase commented on pull request #2130: URL: https://github.com/apache/hbase/pull/2130#issuecomment-668711613

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 9s | Docker mode activated. |
| | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +0 :ok: | prototool | 0m 0s | prototool was not available. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| | | | _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 22s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 37s | master passed |
| +1 :green_heart: | checkstyle | 1m 43s | master passed |
| +1 :green_heart: | spotbugs | 6m 7s | master passed |
| | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 13s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 24s | the patch passed |
| -0 :warning: | checkstyle | 0m 27s | hbase-client: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 11m 10s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | hbaseprotoc | 1m 59s | the patch passed |
| +1 :green_heart: | spotbugs | 6m 38s | the patch passed |
| | | | _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 39s | The patch does not generate ASF License warnings. |
| | | 46m 35s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2130/3/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2130 |
| JIRA Issue | HBASE-24765 |
| Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle cc hbaseprotoc prototool |
| uname | Linux e9182e365d47 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d2f5a5f27b |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2130/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-client.txt |
| Max. process+thread count | 94 (vs. ulimit of 12500) |
| modules | C: hbase-protocol-shaded hbase-client hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2130/3/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[GitHub] [hbase] joshelser commented on a change in pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
joshelser commented on a change in pull request #2193: URL: https://github.com/apache/hbase/pull/2193#discussion_r465189517

## File path: hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java

@@ -1,253 +1,19 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-

Review comment: oops!
[GitHub] [hbase] busbey commented on a change in pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
busbey commented on a change in pull request #2193: URL: https://github.com/apache/hbase/pull/2193#discussion_r465182276

## File path: hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java

@@ -1,253 +1,19 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-

Review comment: gotta put this back.
[jira] [Updated] (HBASE-24816) Fix pre-commit on branch-1
[ https://issues.apache.org/jira/browse/HBASE-24816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharath Vissapragada updated HBASE-24816:
-----------------------------------------
    Description:
After move to ci-hadoop, branch-1 precommits are unable to checkout the source tree.

https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/view/change-requests/job/PR-2194/1/console

{noformat}
04:55:42 Running in /home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-2194/yetus
[Pipeline] {
[Pipeline] checkout
04:55:42 No credentials specified
04:55:42 Cloning the remote Git repository
 > git rev-parse HEAD^{commit} # timeout=10
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f9a427c6aff4f01bf06aa458a935249cfd6a5d30 # timeout=10
04:55:41 Cloning repository https://github.com/apache/yetus.git
04:55:41 > git init /home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-2194/yetus # timeout=10
04:55:41 Fetching upstream changes from https://github.com/apache/yetus.git
04:55:41 > git --version # timeout=10
04:55:41 > git fetch --tags --progress -- https://github.com/apache/yetus.git +refs/heads/*:refs/remotes/origin/* # timeout=10
04:55:44 Checking out Revision 11add70671de39cd96b56e86e40c64c872b9282f (rel/0.11.1)
04:55:42 > git config remote.origin.url https://github.com/apache/yetus.git # timeout=10
04:55:42 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
04:55:42 > git config remote.origin.url https://github.com/apache/yetus.git # timeout=10
04:55:42 Fetching upstream changes from https://github.com/apache/yetus.git
04:55:42 > git fetch --tags --progress -- https://github.com/apache/yetus.git +refs/heads/*:refs/remotes/origin/* # timeout=10
04:55:43 > git rev-parse rel/0.11.1^{commit} # timeout=10
04:55:43 > git rev-parse refs/remotes/origin/rel/0.11.1^{commit} # timeout=10
04:55:44 Commit message: "YETUS-920. Stage version 0.11.1."
04:55:44 First time build. Skipping changelog.
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (precommit-run)
[Pipeline] withCredentials
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] step
04:55:45 Archiving artifacts
[Pipeline] publishHTML
04:55:45 [htmlpublisher] Archiving HTML reports...
{noformat}

  was: After move to ci-hadoop, branch-1 precommits are unable to checkout the source tree.

> Fix pre-commit on branch-1
> --------------------------
>
>                 Key: HBASE-24816
>                 URL: https://issues.apache.org/jira/browse/HBASE-24816
>             Project: HBase
>          Issue Type: Bug
>          Components: build, tooling
>    Affects Versions: 1.7.0
>            Reporter: Bharath Vissapragada
>            Assignee: Bharath Vissapragada
>            Priority: Major
>
> After move to ci-hadoop, branch-1 precommits are unable to checkout the source tree.
> https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/view/change-requests/job/PR-2194/1/console
[GitHub] [hbase] Apache-HBase commented on pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
Apache-HBase commented on pull request #2193: URL: https://github.com/apache/hbase/pull/2193#issuecomment-668698173

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 29s | Docker mode activated. |
| | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| | | | _ master Compile Tests _ |
| +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 3m 36s | master passed |
| +1 :green_heart: | checkstyle | 1m 18s | master passed |
| +1 :green_heart: | spotbugs | 2m 27s | master passed |
| | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 14s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 22s | the patch passed |
| -0 :warning: | checkstyle | 0m 13s | hbase-hadoop-compat: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| -0 :warning: | checkstyle | 1m 3s | hbase-server: The patch generated 2 new + 7 unchanged - 0 fixed = 9 total (was 7) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | hadoopcheck | 11m 15s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. |
| +1 :green_heart: | spotbugs | 2m 45s | the patch passed |
| | | | _ Other Tests _ |
| -1 :x: | asflicense | 0m 26s | The patch generated 1 ASF License warnings. |
| | | 35m 3s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/artifact/yetus-general-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2193 |
| Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle |
| uname | Linux 815f6c25f8a8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d2f5a5f27b |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-hadoop-compat.txt |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt |
| asflicense | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/artifact/yetus-general-check/output/patch-asflicense-problems.txt |
| Max. process+thread count | 94 (vs. ulimit of 12500) |
| modules | C: hbase-hadoop-compat hbase-server U: . |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2193/2/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[jira] [Updated] (HBASE-24816) Fix pre-commit on branch-1
[ https://issues.apache.org/jira/browse/HBASE-24816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharath Vissapragada updated HBASE-24816:
-----------------------------------------
    Description: After move to ci-hadoop, branch-1 precommits are unable to checkout the source tree.  (was: After move to ci-hadoop, branch-1 precommits are unable to checkout the source tree. This should be similar to HBASE-24812.)

> Fix pre-commit on branch-1
> --------------------------
>
>                 Key: HBASE-24816
>                 URL: https://issues.apache.org/jira/browse/HBASE-24816
>             Project: HBase
>          Issue Type: Bug
>          Components: build, tooling
>    Affects Versions: 1.7.0
>            Reporter: Bharath Vissapragada
>            Assignee: Bharath Vissapragada
>            Priority: Major
>
> After move to ci-hadoop, branch-1 precommits are unable to checkout the source tree.
[GitHub] [hbase] wchevreuil commented on a change in pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
wchevreuil commented on a change in pull request #2193: URL: https://github.com/apache/hbase/pull/2193#discussion_r465159767 ## File path: hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationGlobalSourceSource.java ## @@ -1,253 +1,19 @@ -/** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - package org.apache.hadoop.hbase.replication.regionserver; -import org.apache.hadoop.metrics2.lib.MutableFastCounter; -import org.apache.hadoop.metrics2.lib.MutableGaugeLong; -import org.apache.hadoop.metrics2.lib.MutableHistogram; import org.apache.yetus.audience.InterfaceAudience; @InterfaceAudience.Private -public class MetricsReplicationGlobalSourceSource implements MetricsReplicationSourceSource{ - private static final String KEY_PREFIX = "source."; - - private final MetricsReplicationSourceImpl rms; - - private final MutableHistogram ageOfLastShippedOpHist; - private final MutableGaugeLong sizeOfLogQueueGauge; - private final MutableFastCounter logReadInEditsCounter; - private final MutableFastCounter walEditsFilteredCounter; - private final MutableFastCounter shippedBatchesCounter; - private final MutableFastCounter shippedOpsCounter; - private final MutableFastCounter shippedBytesCounter; - private final MutableFastCounter logReadInBytesCounter; - private final MutableFastCounter shippedHFilesCounter; - private final MutableGaugeLong sizeOfHFileRefsQueueGauge; - private final MutableFastCounter unknownFileLengthForClosedWAL; - private final MutableFastCounter uncleanlyClosedWAL; - private final MutableFastCounter uncleanlyClosedSkippedBytes; - private final MutableFastCounter restartWALReading; - private final MutableFastCounter repeatedFileBytes; - private final MutableFastCounter completedWAL; - private final MutableFastCounter completedRecoveryQueue; - private final MutableFastCounter failedRecoveryQueue; - - public MetricsReplicationGlobalSourceSource(MetricsReplicationSourceImpl rms) { -this.rms = rms; - -ageOfLastShippedOpHist = rms.getMetricsRegistry().getHistogram(SOURCE_AGE_OF_LAST_SHIPPED_OP); - -sizeOfLogQueueGauge = rms.getMetricsRegistry().getGauge(SOURCE_SIZE_OF_LOG_QUEUE, 0L); - -shippedBatchesCounter = rms.getMetricsRegistry().getCounter(SOURCE_SHIPPED_BATCHES, 0L); - -shippedOpsCounter = 
rms.getMetricsRegistry().getCounter(SOURCE_SHIPPED_OPS, 0L); - -shippedBytesCounter = rms.getMetricsRegistry().getCounter(SOURCE_SHIPPED_BYTES, 0L); - -logReadInBytesCounter = rms.getMetricsRegistry().getCounter(SOURCE_LOG_READ_IN_BYTES, 0L); - -logReadInEditsCounter = rms.getMetricsRegistry().getCounter(SOURCE_LOG_READ_IN_EDITS, 0L); - -walEditsFilteredCounter = rms.getMetricsRegistry().getCounter(SOURCE_LOG_EDITS_FILTERED, 0L); - -shippedHFilesCounter = rms.getMetricsRegistry().getCounter(SOURCE_SHIPPED_HFILES, 0L); - -sizeOfHFileRefsQueueGauge = -rms.getMetricsRegistry().getGauge(SOURCE_SIZE_OF_HFILE_REFS_QUEUE, 0L); - -unknownFileLengthForClosedWAL = rms.getMetricsRegistry() -.getCounter(SOURCE_CLOSED_LOGS_WITH_UNKNOWN_LENGTH, 0L); -uncleanlyClosedWAL = rms.getMetricsRegistry().getCounter(SOURCE_UNCLEANLY_CLOSED_LOGS, 0L); -uncleanlyClosedSkippedBytes = rms.getMetricsRegistry() -.getCounter(SOURCE_UNCLEANLY_CLOSED_IGNORED_IN_BYTES, 0L); -restartWALReading = rms.getMetricsRegistry().getCounter(SOURCE_RESTARTED_LOG_READING, 0L); -repeatedFileBytes = rms.getMetricsRegistry().getCounter(SOURCE_REPEATED_LOG_FILE_BYTES, 0L); -completedWAL = rms.getMetricsRegistry().getCounter(SOURCE_COMPLETED_LOGS, 0L); -completedRecoveryQueue = rms.getMetricsRegistry() -.getCounter(SOURCE_COMPLETED_RECOVERY_QUEUES, 0L); -failedRecoveryQueue = rms.getMetricsRegistry() -.getCounter(SOURCE_FAILED_RECOVERY_QUEUES, 0L); - } - - @Override public void setLastShippedAge(long age) { -ageOfLastShippedOpHist.add(age); - } - - @Override public void incrSizeOfLogQueue(int size) { -sizeOfLogQueueGauge.incr(size); - } - - @Override public void decrSizeOfLogQueue(int size) { -sizeOfLogQueueGauge.decr(size); - } - - @Override publ
[jira] [Updated] (HBASE-24816) Fix pre-commit on branch-1
[ https://issues.apache.org/jira/browse/HBASE-24816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharath Vissapragada updated HBASE-24816: - Affects Version/s: 1.7.0 > Fix pre-commit on branch-1 > -- > > Key: HBASE-24816 > URL: https://issues.apache.org/jira/browse/HBASE-24816 > Project: HBase > Issue Type: Bug >Affects Versions: 1.7.0 >Reporter: Bharath Vissapragada >Assignee: Bharath Vissapragada >Priority: Major > > After move to ci-hadoop, branch-1 precommits are unable to checkout the > source tree. This should be similar to HBASE-24812. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24805) HBaseTestingUtility.getConnection should be threadsafe
[ https://issues.apache.org/jira/browse/HBASE-24805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-24805: Fix Version/s: 2.4.0 1.7.0 3.0.0-alpha-1 Hadoop Flags: Incompatible change Release Note: Users of `HBaseTestingUtility` can now safely call the `getConnection` method from multiple threads. As a consequence of refactoring to improve the thread safety of the HBase testing classes, the protected `conf` member of the `HBaseCommonTestingUtility` class has been marked final. Downstream users who extend from the class hierarchy rooted at this class will need to pass the Configuration instance they want used to their super constructor rather than overwriting the instance variable. Resolution: Fixed Status: Resolved (was: Patch Available) > HBaseTestingUtility.getConnection should be threadsafe > -- > > Key: HBASE-24805 > URL: https://issues.apache.org/jira/browse/HBASE-24805 > Project: HBase > Issue Type: Bug > Components: test >Reporter: Sean Busbey >Assignee: Sean Busbey >Priority: Major > Fix For: 3.0.0-alpha-1, 1.7.0, 2.4.0 > > > the current javadoc for getConnection carries a thread safety warning: > {code} > /** > * Get a Connection to the cluster. Not thread-safe (This class needs a > lot of work to make it > * thread-safe). > * @return A Connection that can be shared. Don't close. Will be closed on > shutdown of cluster. > */ >public Connection getConnection() throws IOException { > {code} > We then ignore that warning across our test base. We should make the method > threadsafe since the intention is to share a single Connection across all > users of the HTU instance.
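The release note above implies a small migration for downstream subclasses: once the protected `conf` field is final, it can only be set through the super constructor. A minimal sketch of that pattern, using stand-in classes — `Configuration`, `BaseTestingUtility`, and `CustomTestingUtility` here are illustrative toys, not the real HBase types:

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for a Hadoop-style Configuration: just a key/value map.
class Configuration {
  private final Map<String, String> props = new HashMap<>();
  void set(String key, String value) { props.put(key, value); }
  String get(String key) { return props.get(key); }
}

// Models HBaseCommonTestingUtility after HBASE-24805: `conf` is final,
// so the only way to supply it is through the constructor.
class BaseTestingUtility {
  protected final Configuration conf;
  BaseTestingUtility(Configuration conf) {
    this.conf = (conf == null) ? new Configuration() : conf;
  }
}

// A downstream subclass migrated per the release note: it passes the
// Configuration it wants used to super(...) instead of overwriting the
// instance variable afterwards (which no longer compiles).
class CustomTestingUtility extends BaseTestingUtility {
  CustomTestingUtility(Configuration myConf) {
    super(myConf);
  }

  String peek(String key) { return conf.get(key); }

  public static void main(String[] args) {
    Configuration c = new Configuration();
    c.set("hbase.rootdir", "/tmp/hbase-test");
    System.out.println(new CustomTestingUtility(c).peek("hbase.rootdir"));
  }
}
```

The key point is that assignments like `this.conf = myConf;` in subclass constructors or setup methods must move into the `super(...)` call.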
[jira] [Updated] (HBASE-24816) Fix pre-commit on branch-1
[ https://issues.apache.org/jira/browse/HBASE-24816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharath Vissapragada updated HBASE-24816: - Component/s: tooling build > Fix pre-commit on branch-1 > -- > > Key: HBASE-24816 > URL: https://issues.apache.org/jira/browse/HBASE-24816 > Project: HBase > Issue Type: Bug > Components: build, tooling >Affects Versions: 1.7.0 >Reporter: Bharath Vissapragada >Assignee: Bharath Vissapragada >Priority: Major > > After move to ci-hadoop, branch-1 precommits are unable to checkout the > source tree. This should be similar to HBASE-24812.
[jira] [Created] (HBASE-24816) Fix pre-commit on branch-1
Bharath Vissapragada created HBASE-24816: Summary: Fix pre-commit on branch-1 Key: HBASE-24816 URL: https://issues.apache.org/jira/browse/HBASE-24816 Project: HBase Issue Type: Bug Reporter: Bharath Vissapragada Assignee: Bharath Vissapragada After move to ci-hadoop, branch-1 precommits are unable to checkout the source tree. This should be similar to HBASE-24812.
[GitHub] [hbase] busbey closed pull request #2188: HBASE-24805 HBaseTestingUtility.getConnection should be threadsafe (branch-1)
busbey closed pull request #2188: URL: https://github.com/apache/hbase/pull/2188 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] bharathv commented on a change in pull request #2130: HBASE-24765: Dynamic master discovery
bharathv commented on a change in pull request #2130: URL: https://github.com/apache/hbase/pull/2130#discussion_r464801901 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java ## @@ -2931,6 +2935,27 @@ public GetActiveMasterResponse getActiveMaster(RpcController rpcController, return resp.build(); } + @Override + public GetMastersResponse getMasters(RpcController rpcController, GetMastersRequest request) + throws ServiceException { +GetMastersResponse.Builder resp = GetMastersResponse.newBuilder(); +// Active master +Optional serverName = master.getActiveMaster(); +serverName.ifPresent(name -> resp.addMasterServers(GetMastersResponseEntry.newBuilder() + .setServerName(ProtobufUtil.toServerName(name)).setIsActive(true).build())); +// Backup masters +try { + // TODO: Cache the backup masters to avoid a ZK RPC for each getMasters() call. Review comment: Right. ## File path: hbase-client/src/test/java/org/apache/hadoop/hbase/client/TestMasterRegistryHedgedReads.java ## @@ -121,6 +121,11 @@ public boolean hasCellBlockSupport() { @Override public void callMethod(MethodDescriptor method, RpcController controller, Message request, Message responsePrototype, RpcCallback done) { + if (!method.getName().equals("GetClusterId")) { +// Master registry internally runs other RPCs to keep the master list up to date. This check Review comment: Will add more detail. That is needed because of the way the test is written. This RpcChannel implementation intercepts all the mock RPCs from unit tests and the just counts the getClusterId calls (depending on the index).. With the patch a single GetClusterID() RPC failure can trigger an extra getMasters() call and that is accounted too. 
## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/MasterRegistry.java ## @@ -115,20 +129,50 @@ MasterRegistry(Configuration conf) throws IOException { this.hedgedReadFanOut = Math.max(1, conf.getInt(MASTER_REGISTRY_HEDGED_REQS_FANOUT_KEY, MASTER_REGISTRY_HEDGED_REQS_FANOUT_DEFAULT)); -int rpcTimeoutMs = (int) Math.min(Integer.MAX_VALUE, +rpcTimeoutMs = (int) Math.min(Integer.MAX_VALUE, conf.getLong(HConstants.HBASE_RPC_TIMEOUT_KEY, HConstants.DEFAULT_HBASE_RPC_TIMEOUT)); // XXX: we pass cluster id as null here since we do not have a cluster id yet, we have to fetch // this through the master registry... // This is a problem as we will use the cluster id to determine the authentication method rpcClient = RpcClientFactory.createClient(conf, null); rpcControllerFactory = RpcControllerFactory.instantiate(conf); -Set masterAddrs = parseMasterAddrs(conf); +// Generate the seed list of master stubs. Subsequent RPCs try to keep a live list of masters +// by fetching the end points from this list. +populateMasterStubs(parseMasterAddrs(conf)); +Runnable masterEndPointRefresher = () -> { + while (!Thread.interrupted()) { +try { + // Spurious wake ups are okay, worst case we make an extra RPC call to refresh. We won't + // have duplicate refreshes because once the thread is past the wait(), notify()s are + // ignored until the thread is back to the waiting state. + synchronized (refreshMasters) { +refreshMasters.wait(WAIT_TIME_OUT_MS); + } + LOG.debug("Attempting to refresh master address end points."); + Set newMasters = new HashSet<>(getMasters().get()); + populateMasterStubs(newMasters); + LOG.debug("Finished refreshing master end points. 
{}", newMasters); +} catch (InterruptedException e) { + LOG.debug("Interrupted during wait, aborting refresh-masters-thread.", e); + break; +} catch (ExecutionException | IOException e) { + LOG.debug("Error populating latest list of masters.", e); +} + } +}; +masterAddrRefresherThread = Threads.newDaemonThreadFactory( +"MasterRegistry refresh end-points").newThread(masterEndPointRefresher); +masterAddrRefresherThread.start(); Review comment: Ok switched. I didn't want to have extra layers on top of a simple thread, but I guess a pool is more readable. ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/MasterRegistry.java ## @@ -115,20 +129,50 @@ MasterRegistry(Configuration conf) throws IOException { this.hedgedReadFanOut = Math.max(1, conf.getInt(MASTER_REGISTRY_HEDGED_REQS_FANOUT_KEY, MASTER_REGISTRY_HEDGED_REQS_FANOUT_DEFAULT)); -int rpcTimeoutMs = (int) Math.min(Integer.MAX_VALUE, +rpcTimeoutMs = (int) Math.min(Integer.MAX_VALUE, conf.getLong(HConstants.HBASE_RPC_TIMEOUT_KEY, HConstants.DEFAULT_HBASE_RPC_TIMEOUT)); // XXX: we pass cluster id as null
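The refresh loop discussed in this review can be sketched independently of HBase. Everything below — `EndpointRefresher`, the 50 ms timeout, the counter standing in for the `getMasters()` RPC — is illustrative; only the wait/notify coalescing pattern (spurious wakeups merely cost an extra refresh, and notify()s arriving mid-refresh are ignored until the thread parks in wait() again) is taken from the discussion above:

```java
import java.util.concurrent.atomic.AtomicInteger;

class EndpointRefresher {
  private static final long WAIT_TIMEOUT_MS = 50; // stand-in for WAIT_TIME_OUT_MS
  private final Object refreshMasters = new Object(); // the shared monitor
  private final AtomicInteger refreshCount = new AtomicInteger();
  private volatile boolean running = true;

  private final Thread worker = new Thread(() -> {
    while (running) {
      try {
        // Spurious wakeups are okay: worst case is one extra refresh.
        // notify()s that arrive while a refresh is in progress are
        // coalesced, because the thread only hears them while in wait().
        synchronized (refreshMasters) {
          refreshMasters.wait(WAIT_TIMEOUT_MS);
        }
        refreshCount.incrementAndGet(); // stand-in for the getMasters() RPC
      } catch (InterruptedException e) {
        break; // aborting refresh thread
      }
    }
  });

  void start() { worker.setDaemon(true); worker.start(); }

  // Callers poke the monitor to trigger an early refresh.
  void requestRefresh() {
    synchronized (refreshMasters) { refreshMasters.notify(); }
  }

  int refreshes() { return refreshCount.get(); }

  void stop() throws InterruptedException {
    running = false;
    worker.interrupt();
    worker.join();
  }

  // Runs the loop briefly and returns how many refresh passes happened.
  static int demo() {
    EndpointRefresher r = new EndpointRefresher();
    r.start();
    r.requestRefresh();
    try {
      Thread.sleep(250);
      r.stop();
    } catch (InterruptedException e) {
      throw new RuntimeException(e);
    }
    return r.refreshes();
  }

  public static void main(String[] args) {
    System.out.println("refresh passes: " + demo());
  }
}
```

Wrapping the same loop in a single-threaded executor, as the reviewer suggested, changes none of the monitor logic — only who owns the thread.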
[GitHub] [hbase] joshelser commented on a change in pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
joshelser commented on a change in pull request #2193: URL: https://github.com/apache/hbase/pull/2193#discussion_r465154338 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/Replication.java ## @@ -244,17 +248,22 @@ void addHFileRefsToQueue(TableName tableName, byte[] family, List
[GitHub] [hbase] joshelser commented on a change in pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
joshelser commented on a change in pull request #2193: URL: https://github.com/apache/hbase/pull/2193#discussion_r465154178 ## File path: hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSourceSourceImpl.java ## @@ -314,4 +314,15 @@ public String getMetricsName() { @Override public long getEditsFiltered() { return this.walEditsFilteredCounter.value(); } + + @Override + public void setWALReaderEditsBufferBytes(long usage) { +//noop. Global limit, tracked globally. Do not need per-source metrics Review comment: > looks like ReplicationSource class has access to the MetricsSource object.. we can just update the byte usage for that source? (and the global too at the same time). That way we can also get rid of the special logic to update one metric setWALReaderEditsBufferBytes() You are correct that we could do that. I wanted to keep this change scoped on "make what we currently have reportable". I am all for doing a per-source tracking in addition to the globally-scoped tracking. I'd rather just keep these two things separate :)
[GitHub] [hbase] bharathv commented on a change in pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
bharathv commented on a change in pull request #2193: URL: https://github.com/apache/hbase/pull/2193#discussion_r465146006 ## File path: hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSourceSourceImpl.java ## @@ -314,4 +314,15 @@ public String getMetricsName() { @Override public long getEditsFiltered() { return this.walEditsFilteredCounter.value(); } + + @Override + public void setWALReaderEditsBufferBytes(long usage) { +//noop. Global limit, tracked globally. Do not need per-source metrics Review comment: > Once we chuck something into this usage, we have zero insight back to which source put it there. I had the same question. I was wondering if a drill down by source would be helpful in addition to the global usage. Correct me if I'm wrong, looks like ReplicationSource class has access to the MetricsSource object.. we can just update the byte usage for that source? (and the global too at the same time). That way we can also get rid of the special logic to update one metric setWALReaderEditsBufferBytes(). Does that not work for some reason?
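The alternative bharathv floats — update the per-source gauge and the global one in the same call — can be sketched with a pair of counters. `SourceMetrics` and its method names below are hypothetical, not the actual MetricsSource API:

```java
import java.util.concurrent.atomic.AtomicLong;

class SourceMetrics {
  // Shared across all sources; models the existing global buffer metric.
  private static final AtomicLong GLOBAL_BUFFER_BYTES = new AtomicLong();

  // Per-source usage: the drill-down by source being asked about.
  private final AtomicLong sourceBufferBytes = new AtomicLong();

  // One call updates both the source's own gauge and the global gauge,
  // removing the need for a separate global-only setter.
  void incrBufferUsage(long delta) {
    sourceBufferBytes.addAndGet(delta);
    GLOBAL_BUFFER_BYTES.addAndGet(delta);
  }

  long sourceUsage() { return sourceBufferBytes.get(); }

  static long globalUsage() { return GLOBAL_BUFFER_BYTES.get(); }

  public static void main(String[] args) {
    SourceMetrics a = new SourceMetrics();
    SourceMetrics b = new SourceMetrics();
    a.incrBufferUsage(100);
    b.incrBufferUsage(50);
    a.incrBufferUsage(-40); // edits shipped, buffer released
    System.out.println(a.sourceUsage() + " / " + b.sourceUsage()
        + " / global " + globalUsage());
  }
}
```

With this shape, the global value is always the sum of the per-source values, and "zero insight back to which source put it there" goes away; the trade-off is that every source implementation must now touch shared state.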
[GitHub] [hbase] Apache-HBase commented on pull request #2187: HBASE-24665 MultiWAL : Avoid rolling of ALL WALs when one of the WAL needs a roll
Apache-HBase commented on pull request #2187: URL: https://github.com/apache/hbase/pull/2187#issuecomment-668667329

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 2m 3s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
||| _ branch-2.2 Compile Tests _ |
| +1 :green_heart: | mvninstall | 5m 16s | branch-2.2 passed |
| +1 :green_heart: | compile | 0m 58s | branch-2.2 passed |
| +1 :green_heart: | checkstyle | 1m 21s | branch-2.2 passed |
| +1 :green_heart: | shadedjars | 4m 1s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 38s | branch-2.2 passed |
| +0 :ok: | spotbugs | 3m 23s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 3m 22s | branch-2.2 passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 45s | the patch passed |
| +1 :green_heart: | compile | 0m 53s | the patch passed |
| +1 :green_heart: | javac | 0m 53s | the patch passed |
| -1 :x: | checkstyle | 1m 20s | hbase-server: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedjars | 4m 0s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | hadoopcheck | 25m 9s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 2.10.0 or 3.1.2 3.2.1. |
| +1 :green_heart: | javadoc | 0m 35s | the patch passed |
| +1 :green_heart: | findbugs | 3m 27s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 167m 26s | hbase-server in the patch passed. |
| +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. |
| | | 232m 2s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2187/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2187 |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux a12c005399ac 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /home/jenkins/jenkins-home/workspace/Base-PreCommit-GitHub-PR_PR-2187/out/precommit/personality/provided.sh |
| git revision | branch-2.2 / 363a31a5b3 |
| Default Java | 1.8.0_181 |
| checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2187/4/artifact/out/diff-checkstyle-hbase-server.txt |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2187/4/testReport/ |
| Max. process+thread count | 4336 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2187/4/console |
| versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hbase] joshelser commented on a change in pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
joshelser commented on a change in pull request #2193: URL: https://github.com/apache/hbase/pull/2193#discussion_r465118725 ## File path: hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSourceSource.java ## @@ -76,4 +77,13 @@ long getWALEditsRead(); long getShippedOps(); long getEditsFiltered(); + /** + * Sets the total usage of memory used by edits in memory read from WALs. + * @param usage The memory used by edits in bytes + */ + void setWALReaderEditsBufferBytes(long usage); Review comment: Yeah, so the reason this was done this way is that we had no interface for global metrics, just a second implementation of the per-source interface. Stubbed in a new interface so that we can have "global-metrics-only" methods that don't pollute per-source metrics.
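The "global-metrics-only" interface described here can be sketched as a small hierarchy. Apart from `setWALReaderEditsBufferBytes` itself, which appears in the diff, the interface and class names below are illustrative, not the real HBase type hierarchy:

```java
// Per-source metrics: every replication source implements this.
interface ReplicationSourceMetrics {
  void setLastShippedAge(long age);
}

// Global-only metrics extend the per-source surface, so methods that only
// make sense cluster-wide (like the WAL reader buffer usage) do not
// pollute per-source implementations with no-op stubs.
interface GlobalReplicationMetrics extends ReplicationSourceMetrics {
  void setWALReaderEditsBufferBytes(long usage);
}

// Only the global implementation carries the buffer-usage gauge.
class GlobalMetricsImpl implements GlobalReplicationMetrics {
  long lastAge;
  long bufferBytes;

  @Override
  public void setLastShippedAge(long age) { lastAge = age; }

  @Override
  public void setWALReaderEditsBufferBytes(long usage) { bufferBytes = usage; }

  public static void main(String[] args) {
    GlobalMetricsImpl m = new GlobalMetricsImpl();
    m.setWALReaderEditsBufferBytes(4096);
    System.out.println("buffer bytes: " + m.bufferBytes);
  }
}
```

Callers that hold a `GlobalReplicationMetrics` reference can set the buffer gauge without an unsafe downcast, which is the point of splitting the interface rather than stubbing a no-op into every per-source implementation.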
[jira] [Commented] (HBASE-24808) skip empty log cleaner delegate class names (WAS => cleaner.CleanerChore: Can NOT create CleanerDelegate= ClassNotFoundException)
[ https://issues.apache.org/jira/browse/HBASE-24808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170852#comment-17170852 ] Hudson commented on HBASE-24808: Results for branch branch-2.3 [build #199 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/199/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/199/General_20Nightly_20Build_20Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/199/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/199/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 jdk11 hadoop3 checks{color} -- For more information [see jdk11 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/199/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/] (/) {color:green}+1 source release artifact{color} -- See build output for details. 
(/) {color:green}+1 client integration test{color} > skip empty log cleaner delegate class names (WAS => cleaner.CleanerChore: Can > NOT create CleanerDelegate= ClassNotFoundException) > - > > Key: HBASE-24808 > URL: https://issues.apache.org/jira/browse/HBASE-24808 > Project: HBase > Issue Type: Bug >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Trivial > Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0 > > > 2020-07-31 00:19:49,839 WARN [master/ps0753:16000:becomeActiveMaster] > cleaner.CleanerChore: Can NOT create CleanerDelegate= > java.lang.ClassNotFoundException: > at java.net.URLClassLoader.findClass(URLClassLoader.java:382) > at java.lang.ClassLoader.loadClass(ClassLoader.java:418) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) > at java.lang.ClassLoader.loadClass(ClassLoader.java:351) > at java.lang.Class.forName0(Native Method) > at java.lang.Class.forName(Class.java:264) > at > org.apache.hadoop.hbase.master.cleaner.CleanerChore.newFileCleaner(CleanerChore.java:173) > at > org.apache.hadoop.hbase.master.cleaner.CleanerChore.initCleanerChain(CleanerChore.java:155) > at > org.apache.hadoop.hbase.master.cleaner.CleanerChore.<init>(CleanerChore.java:105) > at > org.apache.hadoop.hbase.master.cleaner.HFileCleaner.<init>(HFileCleaner.java:139) > at > org.apache.hadoop.hbase.master.cleaner.HFileCleaner.<init>(HFileCleaner.java:120) > at > org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1424) > at > org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1025) > at > org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2189) > at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:609) > at java.lang.Thread.run(Thread.java:748) > > This is the config: > > <property> > <name>hbase.master.hfilecleaner.plugins</name> > <value></value> > </property> >
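The fix the summary describes ("skip empty log cleaner delegate class names") amounts to filtering blank entries out of the configured plugin list before `Class.forName` is attempted, so an empty `<value>` no longer produces the `ClassNotFoundException:` (with an empty class name) seen in the stack trace. A hedged sketch with an illustrative class and method name, not the real CleanerChore code:

```java
import java.util.ArrayList;
import java.util.List;

class CleanerChainBuilder {
  // Parse a comma-separated plugin list, skipping blank entries; passing
  // "" to Class.forName would otherwise throw ClassNotFoundException.
  static List<String> pluginClasses(String configured) {
    List<String> out = new ArrayList<>();
    if (configured == null) {
      return out;
    }
    for (String name : configured.split(",")) {
      String trimmed = name.trim();
      if (trimmed.isEmpty()) {
        continue; // the fix: ignore empty delegate class names
      }
      out.add(trimmed);
    }
    return out;
  }

  public static void main(String[] args) {
    System.out.println(pluginClasses(" a.B , ,c.D "));
  }
}
```

An empty `hbase.master.hfilecleaner.plugins` value then simply yields an empty chain instead of a startup warning.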
[GitHub] [hbase] joshelser commented on a change in pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
joshelser commented on a change in pull request #2193: URL: https://github.com/apache/hbase/pull/2193#discussion_r465103092 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/Replication.java ## @@ -244,17 +248,22 @@ void addHFileRefsToQueue(TableName tableName, byte[] family, List
[GitHub] [hbase] joshelser commented on a change in pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
joshelser commented on a change in pull request #2193: URL: https://github.com/apache/hbase/pull/2193#discussion_r465102036 ## File path: hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSourceSource.java ## @@ -76,4 +77,13 @@ long getWALEditsRead(); long getShippedOps(); long getEditsFiltered(); + /** + * Sets the total usage of memory used by edits in memory read from WALs. + * @param usage The memory used by edits in bytes + */ + void setWALReaderEditsBufferBytes(long usage); Review comment: IIRC, the type hierarchy is terrible and prevented this from being done in a more clean way (that wouldn't require an unsafe cast), but let me double-check that.
[GitHub] [hbase] joshelser commented on a change in pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
joshelser commented on a change in pull request #2193: URL: https://github.com/apache/hbase/pull/2193#discussion_r465101199 ## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationEndpoint.java ## @@ -497,6 +497,33 @@ public boolean canReplicateToSameCluster() { } } + public static class SleepingReplicationEndpointForTest extends ReplicationEndpointForTest { Review comment: Will do.
[GitHub] [hbase] joshelser commented on a change in pull request #2193: HBASE-24779 Report on the WAL edit buffer usage/limit for replication
joshelser commented on a change in pull request #2193: URL: https://github.com/apache/hbase/pull/2193#discussion_r465100973 ## File path: hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/replication/regionserver/MetricsReplicationSourceSourceImpl.java ## @@ -314,4 +314,15 @@ public String getMetricsName() { @Override public long getEditsFiltered() { return this.walEditsFilteredCounter.value(); } + + @Override + public void setWALReaderEditsBufferBytes(long usage) { +//noop. Global limit, tracked globally. Do not need per-source metrics Review comment: Once we chuck something into this usage, we have zero insight back to which source put it there. Wellington was working on this buffer tracking in HBASE-24813, but I don't think per-source tracking was "in scope"
[GitHub] [hbase] Apache-HBase commented on pull request #2191: HBASE-24813 ReplicationSource should clear buffer usage on Replicatio…
Apache-HBase commented on pull request #2191: URL: https://github.com/apache/hbase/pull/2191#issuecomment-668551955

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 29s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 24s | master passed |
| +1 :green_heart: | compile | 0m 54s | master passed |
| +1 :green_heart: | shadedjars | 5m 35s | branch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 35s | master passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 3m 29s | the patch passed |
| +1 :green_heart: | compile | 0m 53s | the patch passed |
| +1 :green_heart: | javac | 0m 53s | the patch passed |
| +1 :green_heart: | shadedjars | 5m 30s | patch has no errors when building our shaded downstream artifacts. |
| +1 :green_heart: | javadoc | 0m 36s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 142m 41s | hbase-server in the patch passed. |
| | | 166m 20s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2191/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2191 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 6dcb394db800 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d2f5a5f27b |
| Default Java | 1.8.0_232 |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2191/2/testReport/ |
| Max. process+thread count | 3896 (vs. ulimit of 12500) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2191/2/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[GitHub] [hbase] Apache-HBase commented on pull request #2191: HBASE-24813 ReplicationSource should clear buffer usage on Replicatio…
Apache-HBase commented on pull request #2191: URL: https://github.com/apache/hbase/pull/2191#issuecomment-668549855

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| +0 :ok: | reexec | 0m 31s | Docker mode activated. |
| -0 :warning: | yetus | 0m 3s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
||| _ Prechecks _ |
||| _ master Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 18s | master passed |
| +1 :green_heart: | compile | 1m 5s | master passed |
| +1 :green_heart: | shadedjars | 5m 50s | branch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 46s | hbase-server in master failed. |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 4m 17s | the patch passed |
| +1 :green_heart: | compile | 1m 9s | the patch passed |
| +1 :green_heart: | javac | 1m 9s | the patch passed |
| +1 :green_heart: | shadedjars | 6m 4s | patch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 42s | hbase-server in the patch failed. |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 133m 51s | hbase-server in the patch passed. |
| | | 160m 38s | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2191/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2191 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux a1647e3a638b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / d2f5a5f27b |
| Default Java | 2020-01-14 |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2191/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2191/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2191/2/testReport/ |
| Max. process+thread count | 4138 (vs. ulimit of 12500) |
| modules | C: hbase-server U: hbase-server |
| Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2191/2/console |
| versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
[jira] [Updated] (HBASE-24815) hbase-connectors mvn install error
[ https://issues.apache.org/jira/browse/HBASE-24815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] leookok updated HBASE-24815: Description: *when maven command-line* mvn -Dspark.version=2.2.2 -Dscala.version=2.11.7 -Dscala.binary.version=2.11 -Dcheckstyle.skip=true -Dmaven.test.skip=true clean install will return error {color:red}[ERROR]{color} [Error] F:\hbase-connectors\spark\hbase-spark\src\main\scala\org\apache\hadoop\hbase\spark\datasources\HBaseTableScanRDD.scala:216: overloaded method value addTaskCompletionListener with alternatives: (f: org.apache.spark.TaskContext => Unit)org.apache.spark.TaskContext (listener: org.apache.spark.util.TaskCompletionListener)org.apache.spark.TaskContext does not take type parameters {color:red}[ERROR] {color}one error found *but use the spark.version=2.4.0 is ok* mvn -Dspark.version=2.4.0 -Dscala.version=2.11.7 -Dscala.binary.version=2.11 -Dcheckstyle.skip=true -Dmaven.test.skip=true clean install *other try* mvn -Dspark.version=3.0.0 -Dscala.version=2.12.12 -Dscala.binary.version=2.12 -Dcheckstyle.skip=true -Dmaven.test.skip=true clean install return error {color:red}[ERROR]{color} [Error] F:\hbase-connectors\spark\hbase-spark\src\main\scala\org\apache\hadoop\hbase\spark\HBaseContext.scala:439: object SparkHadoopUtil in package deploy cannot be accessed in package org.apache.spark.deploy [ERROR] [Error] F:\hbase-connectors\spark\hbase-spark\src\main\scala\org\apache\hadoop\hbase\spark\HBaseContext.scala:487: not found: value SparkHadoopUtil {color:red}[ERROR]{color} two errors found go to the [spark @github|https://github.com/apache/spark/blob/e1ea806b3075d279b5f08a29fe4c1ad6d3c4191a/core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala] define SparkHadoopUtil to private[spark] {code:java} private[spark] class SparkHadoopUtil extends Logging {} {code} was: *when maven command-line* mvn -Dspark.version=2.2.2 -Dscala.version=2.11.7 -Dscala.binary.version=2.11 -Dcheckstyle.skip=true 
-Dmaven.test.skip=true clean install will return error {color:red}[ERROR]{color} [Error] F:\hbase-connectors\spark\hbase-spark\src\main\scala\org\apache\hadoop\hbase\spark\datasources\HBaseTableScanRDD.scala:216: overloaded method value addTaskCompletionListener with alternatives: (f: org.apache.spark.TaskContext => Unit)org.apache.spark.TaskContext (listener: org.apache.spark.util.TaskCompletionListener)org.apache.spark.TaskContext does not take type parameters {color:red}[ERROR] {color}one error found *other try* mvn -Dspark.version=3.0.0 -Dscala.version=2.12.12 -Dscala.binary.version=2.12 -Dcheckstyle.skip=true -Dmaven.test.skip=true clean install return error {color:red}[ERROR]{color} [Error] F:\hbase-connectors\spark\hbase-spark\src\main\scala\org\apache\hadoop\hbase\spark\HBaseContext.scala:439: object SparkHadoopUtil in package deploy cannot be accessed in package org.apache.spark.deploy [ERROR] [Error] F:\hbase-connectors\spark\hbase-spark\src\main\scala\org\apache\hadoop\hbase\spark\HBaseContext.scala:487: not found: value SparkHadoopUtil {color:red}[ERROR]{color} two errors found > hbase-connectors mvn install error > -- > > Key: HBASE-24815 > URL: https://issues.apache.org/jira/browse/HBASE-24815 > Project: HBase > Issue Type: Bug > Components: hbase-connectors >Reporter: leookok >Priority: Blocker > > *when maven command-line* > mvn -Dspark.version=2.2.2 -Dscala.version=2.11.7 -Dscala.binary.version=2.11 > -Dcheckstyle.skip=true -Dmaven.test.skip=true clean install > will return error > {color:red}[ERROR]{color} [Error] > F:\hbase-connectors\spark\hbase-spark\src\main\scala\org\apache\hadoop\hbase\spark\datasources\HBaseTableScanRDD.scala:216: > overloaded method value addTaskCompletionListener with alternatives: > (f: org.apache.spark.TaskContext => Unit)org.apache.spark.TaskContext > (listener: > org.apache.spark.util.TaskCompletionListener)org.apache.spark.TaskContext > does not take type parameters > {color:red}[ERROR] {color}one error found > *but 
use the spark.version=2.4.0 is ok* > mvn -Dspark.version=2.4.0 -Dscala.version=2.11.7 -Dscala.binary.version=2.11 > -Dcheckstyle.skip=true -Dmaven.test.skip=true clean install > > *other try* > mvn -Dspark.version=3.0.0 -Dscala.version=2.12.12 -Dscala.binary.version=2.12 > -Dcheckstyle.skip=true -Dmaven.test.skip=true clean install > return error > {color:red}[ERROR]{color} [Error] > F:\hbase-connectors\spark\hbase-spark\src\main\scala\org\apache\hadoop\hbase\spark\HBaseContext.scala:439: > object SparkHadoopUtil in package deploy cannot be accessed in package > org.apache.spark.deploy > [ERROR] [Error] > F:\hbase-connectors\spark\hbase-spark\src\main\scala\org\apache\hadoop\hbase\spark\HBaseContext.scala:487: > not found: value SparkHadoopUtil
[jira] [Commented] (HBASE-24754) Bulk load performance is degraded in HBase 2
[ https://issues.apache.org/jira/browse/HBASE-24754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170747#comment-17170747 ] ramkrishna.s.vasudevan commented on HBASE-24754: Thanks [~sreenivasulureddy]. though tags are not there I thought we still check cell.getTagsLength inside PrivateCellUtil#tagsIterator which will do Bytes.toInt. > Bulk load performance is degraded in HBase 2 > - > > Key: HBASE-24754 > URL: https://issues.apache.org/jira/browse/HBASE-24754 > Project: HBase > Issue Type: Bug > Components: Performance >Affects Versions: 2.2.3 >Reporter: Ajeet Rai >Priority: Major > Attachments: Branch1.3_putSortReducer_sampleCode.patch, > Branch2_putSortReducer_sampleCode.patch > > > in our Test,It is observed that Bulk load performance is degraded in HBase 2 . > Test Input: > 1: Table with 500 region(300 column family) > 2: data =2 TB > Data Sample > 186000120150205100068110,1860001,20150205,5,404,735412,2938,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,
111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,1 > 3: Cluster: 7 node(2 master+5 Region Server) > 4: No of Container Launched are same in both case > HBase 2 took 10% more time then HBase 1.3 where test input is same for both > cluster > > |Feature|HBase 2.2.3 > Time(Sec)|HBase 1.3.1 > Time(Sec)|Diff%|Snappy lib: > | > |BulkLoad|21837|19686.16|-10.93|Snappy lib: > HBase 2.2.3: 1.4 > HBase 1.3.1: 1.4| -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] WenFeiYi opened a new pull request #2194: HBASE-24665 MultiWAL : Avoid rolling of ALL WALs when one of the WAL needs a roll
WenFeiYi opened a new pull request #2194: URL: https://github.com/apache/hbase/pull/2194 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hbase] WenFeiYi closed pull request #2152: HBASE-24665 MultiWAL : Avoid rolling of ALL WALs when one of the WAL needs a roll
WenFeiYi closed pull request #2152: URL: https://github.com/apache/hbase/pull/2152 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24754) Bulk load performance is degraded in HBase 2
[ https://issues.apache.org/jira/browse/HBASE-24754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170727#comment-17170727 ] Y. SREENIVASULU REDDY commented on HBASE-24754: --- Thanks [~ram_krish] for looking into this issue. In our case, we have the tags are empty. As you suggested, i have modified and tested, but there is no difference in the result. > Bulk load performance is degraded in HBase 2 > - > > Key: HBASE-24754 > URL: https://issues.apache.org/jira/browse/HBASE-24754 > Project: HBase > Issue Type: Bug > Components: Performance >Affects Versions: 2.2.3 >Reporter: Ajeet Rai >Priority: Major > Attachments: Branch1.3_putSortReducer_sampleCode.patch, > Branch2_putSortReducer_sampleCode.patch > > > in our Test,It is observed that Bulk load performance is degraded in HBase 2 . > Test Input: > 1: Table with 500 region(300 column family) > 2: data =2 TB > Data Sample > 186000120150205100068110,1860001,20150205,5,404,735412,2938,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,1
11,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,1 > 3: Cluster: 7 node(2 master+5 Region Server) > 4: No of Container Launched are same in both case > HBase 2 took 10% more time then HBase 1.3 where test input is same for both > cluster > > |Feature|HBase 2.2.3 > Time(Sec)|HBase 1.3.1 > Time(Sec)|Diff%|Snappy lib: > | > |BulkLoad|21837|19686.16|-10.93|Snappy lib: > HBase 2.2.3: 1.4 > HBase 1.3.1: 1.4| -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24295) [Chaos Monkey] abstract logging through the class hierarchy
[ https://issues.apache.org/jira/browse/HBASE-24295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170713#comment-17170713 ] Hudson commented on HBASE-24295: Results for branch branch-1 [build #1337 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1337/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1337//General_Nightly_Build_Report/] (x) {color:red}-1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1337//JDK7_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1337//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 source release artifact{color} -- See build output for details. > [Chaos Monkey] abstract logging through the class hierarchy > --- > > Key: HBASE-24295 > URL: https://issues.apache.org/jira/browse/HBASE-24295 > Project: HBase > Issue Type: Task > Components: integration tests >Affects Versions: 2.3.0 >Reporter: Nick Dimiduk >Assignee: Nick Dimiduk >Priority: Minor > Fix For: 3.0.0-alpha-1, 2.3.0, 1.7.0 > > > Running chaos monkey and watching the logs, it's very difficult to tell what > actions are actually running. There's lots of shared methods through the > class hierarchy that extends from {{abstract class Action}}, and each class > comes with its own {{Logger}}. As a result, the logs have useless stuff like > {noformat} > INFO actions.Action: Started regionserver... > {noformat} > Add {{protected abstract Logger getLogger()}} to the class's internal > interface, and have the concrete implementations provide their logger. -- This message was sent by Atlassian Jira (v8.3.4#803005)
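The logging abstraction HBASE-24295 describes can be sketched in a few lines. This is a minimal illustration using java.util.logging as a stand-in for the slf4j Logger the real chaos-monkey Action classes use, with a hypothetical concrete action name:

```java
import java.util.logging.Logger;

// Shared base class: common helper methods log through an abstract accessor,
// so log lines carry the concrete action's logger name instead of the
// generic "actions.Action".
abstract class Action {
    protected abstract Logger getLogger();

    void startRegionServer() {
        getLogger().info("Started regionserver...");
    }
}

// Each concrete implementation provides its own logger.
class RestartRandomRsAction extends Action {
    private static final Logger LOG =
        Logger.getLogger(RestartRandomRsAction.class.getName());

    @Override
    protected Logger getLogger() {
        return LOG;
    }
}
```

With this shape, the shared helper still lives in the base class, but the emitted log line is attributed to RestartRandomRsAction rather than the abstract Action, which is exactly what makes chaos-monkey logs readable.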
[jira] [Updated] (HBASE-24795) RegionMover should deal with unknown (split/merged) regions
[ https://issues.apache.org/jira/browse/HBASE-24795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viraj Jasani updated HBASE-24795: - Fix Version/s: 2.4.0 2.3.1 3.0.0-alpha-1 > RegionMover should deal with unknown (split/merged) regions > --- > > Key: HBASE-24795 > URL: https://issues.apache.org/jira/browse/HBASE-24795 > Project: HBase > Issue Type: Improvement >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Fix For: 3.0.0-alpha-1, 2.3.1, 2.4.0 > > > For a cluster under very high load, it is quite common to see flushes/compactions > happening every minute on each RegionServer, and there is a high chance > of multiple regions going through splits/merges. > While unloading all regions (graceful stop), RegionMover writes them to a local file; > while loading them back (graceful start), it tries to bring every single region back > from the other RSs. During that load, even if a single region can't be moved back, > RegionMover considers the load() a failure. This misses the possibility that some > regions have gone through the split/merge process, so not all the regions written to > the local file may still exist. Hence, RegionMover should gracefully handle any > unknown region without marking load() as failed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-24754) Bulk load performance is degraded in HBase 2
[ https://issues.apache.org/jira/browse/HBASE-24754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170710#comment-17170710 ] ramkrishna.s.vasudevan commented on HBASE-24754: Thanks [~sreenivasulureddy]. {code} TagUtil.carryForwardTags(tags, cell); {code} I think here we still compute the length of the tags and only then check tags.isEmpty(). [~sreenivasulureddy] - can you replace {code} TagUtil.carryForwardTags(tags, cell); if (!tags.isEmpty()) { kv = (KeyValue) kvCreator.create(cell.getRowArray(), cell.getRowOffset(), cell.getRowLength(), cell.getFamilyArray(), cell.getFamilyOffset(), cell.getFamilyLength(), cell.getQualifierArray(), cell.getQualifierOffset(), cell.getQualifierLength(), cell.getTimestamp(), cell.getValueArray(), cell.getValueOffset(), cell.getValueLength(), tags); } else { kv = KeyValueUtil.ensureKeyValue(cell); } {code} with {code} kv = KeyValueUtil.ensureKeyValue(cell); {code} as in the branch-1.3 code, and rerun the above experiment with branch-2? > Bulk load performance is degraded in HBase 2 > - > > Key: HBASE-24754 > URL: https://issues.apache.org/jira/browse/HBASE-24754 > Project: HBase > Issue Type: Bug > Components: Performance >Affects Versions: 2.2.3 >Reporter: Ajeet Rai >Priority: Major > Attachments: Branch1.3_putSortReducer_sampleCode.patch, > Branch2_putSortReducer_sampleCode.patch > > > in our Test,It is observed that Bulk load performance is degraded in HBase 2 .
> Test Input: > 1: Table with 500 region(300 column family) > 2: data =2 TB > Data Sample > 186000120150205100068110,1860001,20150205,5,404,735412,2938,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,1 > 3: Cluster: 7 node(2 master+5 Region Server) > 4: No of Container Launched are same in both case > HBase 2 took 10% more time then HBase 1.3 where test input is same for both > cluster > > |Feature|HBase 2.2.3 > Time(Sec)|HBase 1.3.1 > Time(Sec)|Diff%|Snappy lib: > | > |BulkLoad|21837|19686.16|-10.93|Snappy lib: > HBase 2.2.3: 1.4 > HBase 1.3.1: 1.4| -- This message was sent by Atlassian Jira (v8.3.4#803005)
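To make the two code paths being compared above concrete, here is a stripped-down, self-contained sketch. FakeCell and the byte-array "KeyValue" results are hypothetical stand-ins, not the real HBase Cell/KeyValue API: the branch-2 shape materializes a tag list and branches on it for every cell, while the suggested branch-1.3 shape converts the cell directly.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for HBase's Cell, just to illustrate the control flow
// discussed in the comment above (not the real API).
class FakeCell {
    final byte[] value;
    final byte[] tags; // empty array = no tags

    FakeCell(byte[] value, byte[] tags) {
        this.value = value;
        this.tags = tags;
    }
}

class PutSortReducerSketch {
    // branch-2 style: always materialize the tag list (which requires reading
    // the tag length on every cell), then branch on emptiness.
    static byte[] branch2Path(FakeCell cell) {
        List<byte[]> tags = new ArrayList<>();
        if (cell.tags.length > 0) { // carryForwardTags still inspects the tag length
            tags.add(cell.tags);
        }
        if (!tags.isEmpty()) {
            // stands in for kvCreator.create(..., tags): value bytes + tag bytes
            byte[] out = new byte[cell.value.length + cell.tags.length];
            System.arraycopy(cell.value, 0, out, 0, cell.value.length);
            System.arraycopy(cell.tags, 0, out, cell.value.length, cell.tags.length);
            return out;
        }
        return cell.value; // stands in for KeyValueUtil.ensureKeyValue(cell)
    }

    // branch-1.3 style: skip the tag bookkeeping entirely.
    static byte[] branch13Path(FakeCell cell) {
        return cell.value; // stands in for KeyValueUtil.ensureKeyValue(cell)
    }
}
```

When no tags are present, both paths produce the same result, which is why the suggested simplification is safe for the tag-free workload and avoids the per-cell tag bookkeeping.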
[jira] [Comment Edited] (HBASE-24754) Bulk load performance is degraded in HBase 2
[ https://issues.apache.org/jira/browse/HBASE-24754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170687#comment-17170687 ] Y. SREENIVASULU REDDY edited comment on HBASE-24754 at 8/4/20, 9:58 AM: Verified the mapper task operations and the data writing done by HFileOutputFormat2; no time-consuming operations were observed there. But time differences were observed in the PutSortReducer class while processing the "Put" objects. I executed the tests and posted the results here; please find the attached sample code to reproduce the issue between branch-2 and branch-1.3. In the reduce operation that processes the "Put" objects, a slowdown of ~30% was observed. 1. Verified the test with 10 rows. 2. Each row size is ~1K. 3. Each row has a single column family and 300 qualifiers. 4. Tested with java version JDK1.8.0_232. 5. Test Results ||Rows processing Time||Branch 1.3 Time (ms)||Branch 2 Time (ms)||%Difference|| |Test 1|12545|18955|-33.8| |Test 2|12693|18840|-32.6| |Test 3|12694|18939|-32.9| was (Author: sreenivasulureddy): Attached the sample code to reproduce the issue, for between the Branch-2 and Branch-1.3 In Reduce operation to process the "PUT" objects observed the difference ~30% reduced. 1. Verified the test with 10 rows. 2. Each row size is ~1K. 3. Each row have single column-family and 300 qualifiers 4. Tested with java version (JDK1.8.0_232) 5.
Test Results ||Rows processing Time||Branch 1.3 Time (ms)||Branch 2 Time (ms)||%Difference|| |Test 1|12545|18955|-33.8| |Test 2|12693|18840|-32.6| |Test 3|12694|18939|-32.9| > Bulk load performance is degraded in HBase 2 > - > > Key: HBASE-24754 > URL: https://issues.apache.org/jira/browse/HBASE-24754 > Project: HBase > Issue Type: Bug > Components: Performance >Affects Versions: 2.2.3 >Reporter: Ajeet Rai >Priority: Major > Attachments: Branch1.3_putSortReducer_sampleCode.patch, > Branch2_putSortReducer_sampleCode.patch > > > in our Test,It is observed that Bulk load performance is degraded in HBase 2 . > Test Input: > 1: Table with 500 region(300 column family) > 2: data =2 TB > Data Sample > 186000120150205100068110,1860001,20150205,5,404,735412,2938,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,1 > 3: Cluster: 7 node(2 master+5 Region Server) > 4: No of 
Container Launched are same in both case > HBase 2 took 10% more time then HBase 1.3 where test input is same for both > cluster > > |Feature|HBase 2.2.3 > Time(Sec)|HBase 1.3.1 > Time(Sec)|Diff%|Snappy lib: > | > |BulkLoad|21837|19686.16|-10.93|Snappy lib: > HBase 2.2.3: 1.4 > HBase 1.3.1: 1.4| -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2191: HBASE-24813 ReplicationSource should clear buffer usage on Replicatio…
Apache-HBase commented on pull request #2191: URL: https://github.com/apache/hbase/pull/2191#issuecomment-668496996 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 8s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | ||| _ master Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 43s | master passed | | +1 :green_heart: | checkstyle | 1m 6s | master passed | | +1 :green_heart: | spotbugs | 1m 59s | master passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 3m 25s | the patch passed | | -0 :warning: | checkstyle | 1m 4s | hbase-server: The patch generated 3 new + 4 unchanged - 6 fixed = 7 total (was 10) | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | hadoopcheck | 11m 2s | Patch does not cause any errors with Hadoop 3.1.2 3.2.1. | | +1 :green_heart: | spotbugs | 2m 6s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 15s | The patch does not generate ASF License warnings. 
| | | | 33m 7s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2191/2/artifact/yetus-general-check/output/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/2191 | | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti checkstyle | | uname | Linux 52acb81cf204 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | master / d2f5a5f27b | | checkstyle | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2191/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt | | Max. process+thread count | 94 (vs. ulimit of 12500) | | modules | C: hbase-server U: hbase-server | | Console output | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2191/2/console | | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) spotbugs=3.1.12 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HBASE-24754) Bulk load performance is degraded in HBase 2
[ https://issues.apache.org/jira/browse/HBASE-24754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170687#comment-17170687 ] Y. SREENIVASULU REDDY commented on HBASE-24754: --- Attached the sample code to reproduce the issue between branch-2 and branch-1.3. In the reduce operation that processes the "Put" objects, a slowdown of ~30% was observed. 1. Verified the test with 10 rows. 2. Each row size is ~1K. 3. Each row has a single column family and 300 qualifiers. 4. Tested with java version JDK1.8.0_232. 5. Test Results ||Rows processing Time||Branch 1.3 Time (ms)||Branch 2 Time (ms)||%Difference|| |Test 1|12545|18955|-33.8| |Test 2|12693|18840|-32.6| |Test 3|12694|18939|-32.9| > Bulk load performance is degraded in HBase 2 > - > > Key: HBASE-24754 > URL: https://issues.apache.org/jira/browse/HBASE-24754 > Project: HBase > Issue Type: Bug > Components: Performance >Affects Versions: 2.2.3 >Reporter: Ajeet Rai >Priority: Major > Attachments: Branch1.3_putSortReducer_sampleCode.patch, > Branch2_putSortReducer_sampleCode.patch > > > in our Test,It is observed that Bulk load performance is degraded in HBase 2 .
> Test Input: > 1: Table with 500 region(300 column family) > 2: data =2 TB > Data Sample > 186000120150205100068110,1860001,20150205,5,404,735412,2938,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,1 > 3: Cluster: 7 node(2 master+5 Region Server) > 4: No of Container Launched are same in both case > HBase 2 took 10% more time then HBase 1.3 where test input is same for both > cluster > > |Feature|HBase 2.2.3 > Time(Sec)|HBase 1.3.1 > Time(Sec)|Diff%|Snappy lib: > | > |BulkLoad|21837|19686.16|-10.93|Snappy lib: > HBase 2.2.3: 1.4 > HBase 1.3.1: 1.4| -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24507) Remove HTableDescriptor and HColumnDescriptor
[ https://issues.apache.org/jira/browse/HBASE-24507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-24507: -- Release Note: Removed HTableDescriptor and HColumnDescriptor. Please use TableDescriptor and ColumnFamilyDescriptor instead. Since the latter classes are immutable, you should use TableDescriptorBuilder and ColumnFamilyDescriptorBuilder to create them. TableDescriptorBuilder.ModifyableTableDescriptor and ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor are both changed from public to private now. This does not break our compatibility rule, as they are marked IA.Private; but we do expose these two classes in some IA.Public classes, such as HBTU, so if you use those methods you will have to change your code. was: Removed HTableDescriptor and HColumnDescritor. Please use TableDescriptor and ColumnFamilyDescriptor instead. Since the latter classes are immutable, you should use TableDescriptorBuilder and ColumnFamilyDescriptorBuilder to create them. > Remove HTableDescriptor and HColumnDescriptor > - > > Key: HBASE-24507 > URL: https://issues.apache.org/jira/browse/HBASE-24507 > Project: HBase > Issue Type: Bug > Components: Client >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 3.0.0-alpha-1 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
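The builder-based replacement the release note points to follows the standard immutable-object builder pattern. As a rough, self-contained sketch (the Mini* names below are hypothetical stand-ins; the real classes are TableDescriptorBuilder and ColumnFamilyDescriptorBuilder in hbase-client):

```java
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical miniature of the pattern the release note describes:
// the descriptor itself is immutable, so all mutation happens on the builder.
final class MiniTableDescriptor {
    private final String name;
    private final Set<String> families;

    MiniTableDescriptor(String name, Set<String> families) {
        this.name = name;
        this.families = Collections.unmodifiableSet(new LinkedHashSet<>(families));
    }

    String getName() { return name; }
    Set<String> getFamilies() { return families; }
}

final class MiniTableDescriptorBuilder {
    private final String name;
    private final Set<String> families = new LinkedHashSet<>();

    private MiniTableDescriptorBuilder(String name) { this.name = name; }

    static MiniTableDescriptorBuilder newBuilder(String name) {
        return new MiniTableDescriptorBuilder(name);
    }

    MiniTableDescriptorBuilder setColumnFamily(String family) {
        families.add(family);
        return this; // chainable, like the real builder
    }

    MiniTableDescriptor build() { return new MiniTableDescriptor(name, families); }
}
```

The real builders are used the same way, e.g. a TableDescriptorBuilder.newBuilder(...).setColumnFamily(...).build() chain, which replaces the old mutable HTableDescriptor setters.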
[jira] [Updated] (HBASE-24754) Bulk load performance is degraded in HBase 2
[ https://issues.apache.org/jira/browse/HBASE-24754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Y. SREENIVASULU REDDY updated HBASE-24754: -- Attachment: Branch2_putSortReducer_sampleCode.patch > Bulk load performance is degraded in HBase 2 > - > > Key: HBASE-24754 > URL: https://issues.apache.org/jira/browse/HBASE-24754 > Project: HBase > Issue Type: Bug > Components: Performance >Affects Versions: 2.2.3 >Reporter: Ajeet Rai >Priority: Major > Attachments: Branch1.3_putSortReducer_sampleCode.patch, > Branch2_putSortReducer_sampleCode.patch > > > in our Test,It is observed that Bulk load performance is degraded in HBase 2 . > Test Input: > 1: Table with 500 region(300 column family) > 2: data =2 TB > Data Sample > 186000120150205100068110,1860001,20150205,5,404,735412,2938,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,1 > 3: Cluster: 7 
node(2 master+5 Region Server) > 4: No of Container Launched are same in both case > HBase 2 took 10% more time then HBase 1.3 where test input is same for both > cluster > > |Feature|HBase 2.2.3 > Time(Sec)|HBase 1.3.1 > Time(Sec)|Diff%|Snappy lib: > | > |BulkLoad|21837|19686.16|-10.93|Snappy lib: > HBase 2.2.3: 1.4 > HBase 1.3.1: 1.4| -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-24754) Bulk load performance is degraded in HBase 2
[ https://issues.apache.org/jira/browse/HBASE-24754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Y. SREENIVASULU REDDY updated HBASE-24754: -- Attachment: Branch1.3_putSortReducer_sampleCode.patch > Bulk load performance is degraded in HBase 2 > - > > Key: HBASE-24754 > URL: https://issues.apache.org/jira/browse/HBASE-24754 > Project: HBase > Issue Type: Bug > Components: Performance >Affects Versions: 2.2.3 >Reporter: Ajeet Rai >Priority: Major > Attachments: Branch1.3_putSortReducer_sampleCode.patch > > > in our Test,It is observed that Bulk load performance is degraded in HBase 2 . > Test Input: > 1: Table with 500 region(300 column family) > 2: data =2 TB > Data Sample > 186000120150205100068110,1860001,20150205,5,404,735412,2938,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,111,1 > 3: Cluster: 7 node(2 master+5 Region Server) > 4: No of 
Container Launched are same in both case > HBase 2 took 10% more time then HBase 1.3 where test input is same for both > cluster > > |Feature|HBase 2.2.3 > Time(Sec)|HBase 1.3.1 > Time(Sec)|Diff%|Snappy lib: > | > |BulkLoad|21837|19686.16|-10.93|Snappy lib: > HBase 2.2.3: 1.4 > HBase 1.3.1: 1.4| -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on pull request #2184: HBASE-24680 Refactor the checkAndMutate code on the server side
Apache-HBase commented on pull request #2184: URL: https://github.com/apache/hbase/pull/2184#issuecomment-668474837

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 1m 11s | Docker mode activated. |
| -0 :warning: | yetus | 0m 5s | Unprocessed flag(s): --brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list --whitespace-tabs-ignore-list --quick-hadoopcheck |
| | | | _ Prechecks _ |
| | | | _ branch-2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 25s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 5m 48s | branch-2 passed |
| +1 :green_heart: | compile | 2m 42s | branch-2 passed |
| +1 :green_heart: | shadedjars | 8m 0s | branch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 20s | hbase-hadoop-compat in branch-2 failed. |
| -0 :warning: | javadoc | 0m 21s | hbase-hadoop2-compat in branch-2 failed. |
| -0 :warning: | javadoc | 0m 34s | hbase-client in branch-2 failed. |
| -0 :warning: | javadoc | 0m 50s | hbase-server in branch-2 failed. |
| | | | _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 17s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 5m 14s | the patch passed |
| +1 :green_heart: | compile | 2m 50s | the patch passed |
| +1 :green_heart: | javac | 2m 50s | the patch passed |
| +1 :green_heart: | shadedjars | 7m 50s | patch has no errors when building our shaded downstream artifacts. |
| -0 :warning: | javadoc | 0m 21s | hbase-hadoop-compat in the patch failed. |
| -0 :warning: | javadoc | 0m 21s | hbase-hadoop2-compat in the patch failed. |
| -0 :warning: | javadoc | 0m 33s | hbase-client in the patch failed. |
| -0 :warning: | javadoc | 0m 48s | hbase-server in the patch failed. |
| | | | _ Other Tests _ |
| +1 :green_heart: | unit | 0m 31s | hbase-hadoop-compat in the patch passed. |
| +1 :green_heart: | unit | 0m 36s | hbase-hadoop2-compat in the patch passed. |
| +1 :green_heart: | unit | 2m 35s | hbase-client in the patch passed. |
| -1 :x: | unit | 695m 58s | hbase-server in the patch failed. |
| | | 854m 17s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.12 Server=19.03.12 base: https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2184/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile |
| GITHUB PR | https://github.com/apache/hbase/pull/2184 |
| Optional Tests | javac javadoc unit shadedjars compile |
| uname | Linux 2170838e5f12 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2 / 86d2e37bc6 |
| Default Java | 2020-01-14 |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2184/5/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-hadoop-compat.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2184/5/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-hadoop2-compat.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2184/5/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2184/5/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2184/5/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-hadoop-compat.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2184/5/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-hadoop2-compat.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2184/5/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt |
| javadoc | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2184/5/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt |
| unit | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2184/5/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt |
| Test Results | https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-2184/5/testReport/ |
| Max. process+thread count | 3341 (vs. ulimit of 12500) |
| modules | C: hbase-hadoop-compat hbase-hadoop2-compat hbase-client hbase-server U: . |
| Console output | http
[jira] [Updated] (HBASE-24815) hbase-connectors mvn install error
[ https://issues.apache.org/jira/browse/HBASE-24815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

leookok updated HBASE-24815:
----------------------------
    Description:

*when maven command-line*

mvn -Dspark.version=2.2.2 -Dscala.version=2.11.7 -Dscala.binary.version=2.11 -Dcheckstyle.skip=true -Dmaven.test.skip=true clean install

will return error

{color:red}[ERROR]{color} [Error] F:\hbase-connectors\spark\hbase-spark\src\main\scala\org\apache\hadoop\hbase\spark\datasources\HBaseTableScanRDD.scala:216: overloaded method value addTaskCompletionListener with alternatives:
  (f: org.apache.spark.TaskContext => Unit)org.apache.spark.TaskContext
  (listener: org.apache.spark.util.TaskCompletionListener)org.apache.spark.TaskContext
does not take type parameters
{color:red}[ERROR]{color} one error found

*other try*

mvn -Dspark.version=3.0.0 -Dscala.version=2.12.12 -Dscala.binary.version=2.12 -Dcheckstyle.skip=true -Dmaven.test.skip=true clean install

return error

{color:red}[ERROR]{color} [Error] F:\hbase-connectors\spark\hbase-spark\src\main\scala\org\apache\hadoop\hbase\spark\HBaseContext.scala:439: object SparkHadoopUtil in package deploy cannot be accessed in package org.apache.spark.deploy
[ERROR] [Error] F:\hbase-connectors\spark\hbase-spark\src\main\scala\org\apache\hadoop\hbase\spark\HBaseContext.scala:487: not found: value SparkHadoopUtil
{color:red}[ERROR]{color} two errors found

    was: (the same description, except that the HBaseContext.scala paths in the second error were given under F:\projects\git-hub\hbase-connectors\ instead of F:\hbase-connectors\)

> hbase-connectors mvn install error
> ----------------------------------
>
>                 Key: HBASE-24815
>                 URL: https://issues.apache.org/jira/browse/HBASE-24815
>             Project: HBase
>          Issue Type: Bug
>          Components: hbase-connectors
>            Reporter: leookok
>            Priority: Blocker

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
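For context on the first failure: `HBaseTableScanRDD.scala:216` passes an explicit type parameter to `addTaskCompletionListener`, which only type-checks against the generic overload `addTaskCompletionListener[U](f: TaskContext => U)` that Spark added in 2.4; Spark 2.2.x exposes only the two non-generic alternatives listed in the error. A minimal self-contained Scala sketch of the overload shape (`FakeTaskContext` and `OverloadDemo` are hypothetical stand-ins for illustration, not the real Spark API):

```scala
// Stand-in mirroring the non-generic method Spark 2.2's TaskContext exposes:
//   def addTaskCompletionListener(f: TaskContext => Unit): TaskContext
class FakeTaskContext {
  def addTaskCompletionListener(f: FakeTaskContext => Unit): FakeTaskContext = {
    f(this); this
  }
}

object OverloadDemo extends App {
  val ctx = new FakeTaskContext

  // Compiles: no type parameter, matches the non-generic signature.
  ctx.addTaskCompletionListener(c => println("task done"))

  // The failing hbase-spark call supplies an explicit type parameter, e.g.
  //   ctx.addTaskCompletionListener[Unit] { c => close() }
  // Against a non-generic method the Scala compiler reports
  // "addTaskCompletionListener ... does not take type parameters",
  // exactly as in the log above. Spark 2.4+ declares the generic
  //   addTaskCompletionListener[U](f: TaskContext => U)
  // which accepts that call.
}
```

The second failure is consistent with `SparkHadoopUtil` having been made `private[spark]` in Spark 3.0, so code referencing `org.apache.spark.deploy.SparkHadoopUtil` from outside Spark no longer compiles. In other words, the reported Spark versions (2.2.2 and 3.0.0) each sit on the wrong side of an API change relative to what hbase-spark was written against; building with the Spark version the connector's pom targets would presumably avoid both errors.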