[jira] [Assigned] (HBASE-23042) Parameters are incorrect in procedures jsp
[ https://issues.apache.org/jira/browse/HBASE-23042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Mei reassigned HBASE-23042:
------------------------------
    Assignee: Yi Mei

> Parameters are incorrect in procedures jsp
> ------------------------------------------
>                 Key: HBASE-23042
>                 URL: https://issues.apache.org/jira/browse/HBASE-23042
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Yi Mei
>            Assignee: Yi Mei
>            Priority: Major
>         Attachments: 1.png
>
> In the procedures jsp, the parameters for table name and region start/end keys
> are wrong; please see the attached picture.
> This is because all byte[] params are encoded in base64. It is confusing.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
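The confusion described above comes from showing raw byte[] parameters as base64 rather than in a printable form. A minimal, self-contained sketch of a toStringBinary-style rendering (the class and method names here are illustrative; HBase's real helper is `org.apache.hadoop.hbase.util.Bytes.toStringBinary`):

```java
import java.util.Base64;

public class ProcedureParamRender {
    // Decode a base64-encoded byte[] parameter and render printable ASCII
    // as-is, escaping everything else as \xNN. Hypothetical helper, not the
    // actual patch for HBASE-23042.
    static String toStringBinary(byte[] b) {
        StringBuilder sb = new StringBuilder();
        for (byte v : b) {
            int ch = v & 0xff;
            if (ch >= ' ' && ch <= '~' && ch != '\\') {
                sb.append((char) ch);
            } else {
                sb.append(String.format("\\x%02X", ch));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // "dGVzdFRhYmxl" is the base64 encoding of the table name "testTable".
        byte[] raw = Base64.getDecoder().decode("dGVzdFRhYmxl");
        System.out.println(toStringBinary(raw)); // prints testTable
    }
}
```

Rendering this way keeps human-readable keys readable while still making non-printable bytes unambiguous.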
[GitHub] [hbase] mymeiyi commented on a change in pull request #721: HBASE-23170 Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
mymeiyi commented on a change in pull request #721: HBASE-23170 Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
URL: https://github.com/apache/hbase/pull/721#discussion_r335835605

## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
##
@@ -1048,8 +1048,7 @@
    * @return current live region servers list wrapped by {@link CompletableFuture}
    */
   default CompletableFuture<Collection<ServerName>> getRegionServers() {
-    return getClusterMetrics(EnumSet.of(Option.LIVE_SERVERS))
-      .thenApply(cm -> cm.getLiveServerMetrics().keySet());
+    return getClusterMetrics(EnumSet.of(Option.SERVERS_NAME)).thenApply(cm -> cm.getServersName());

Review comment: done

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
With regards,
Apache Git Services
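The one-liner in the diff above is ordinary CompletableFuture composition: fetch a metrics object asynchronously, then project out the one field callers want. A self-contained stdlib sketch of the same thenApply shape (no HBase classes; the map key and server-name strings are made up for illustration):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

public class ThenApplyDemo {
    // Stand-in for AsyncAdmin#getRegionServers: ask for a metrics-like map
    // asynchronously, then project the single field we need, exactly as the
    // diff does with thenApply(cm -> cm.getServersName()).
    static CompletableFuture<List<String>> getRegionServers() {
        CompletableFuture<Map<String, List<String>>> metrics =
            CompletableFuture.supplyAsync(
                () -> Map.of("servers_name", List.of("rs1,16020,1", "rs2,16020,1")));
        return metrics.thenApply(cm -> cm.get("servers_name"));
    }

    public static void main(String[] args) {
        // join() blocks until the future completes.
        System.out.println(getRegionServers().join().size()); // prints 2
    }
}
```

The point of the change itself is narrower: SERVERS_NAME asks the master for only the server names, instead of pulling full live-server metrics just to throw most of them away.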
[GitHub] [hbase] mymeiyi commented on a change in pull request #721: HBASE-23170 Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
mymeiyi commented on a change in pull request #721: HBASE-23170 Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
URL: https://github.com/apache/hbase/pull/721#discussion_r335835232

## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionPlacementMaintainer.java
##
@@ -204,9 +202,7 @@ private void genAssignmentPlan(TableName tableName,
     // Get all the region servers
     List<ServerName> servers = new ArrayList<>();
-    servers.addAll(
-      FutureUtils.get(getConnection().getAdmin().getClusterMetrics(EnumSet.of(Option.LIVE_SERVERS)))
-        .getLiveServerMetrics().keySet());
+    servers.addAll(FutureUtils.get(getConnection().getAdmin().getRegionServers()));

Review comment: It's an async admin and does not need to be closed.
[jira] [Commented] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"
[ https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953433#comment-16953433 ]

Sean Busbey commented on HBASE-19663:
-------------------------------------
That's got it now. Pretty good sign I should call it a night. :) No problem building branch-1.3 without the patch. Still waiting on the master build w/o the patch.

> site build fails complaining "javadoc: error - class file for
> javax.annotation.meta.TypeQualifierNickname not found"
>
>                 Key: HBASE-19663
>                 URL: https://issues.apache.org/jira/browse/HBASE-19663
>             Project: HBase
>          Issue Type: Bug
>          Components: documentation, website
>            Reporter: Michael Stack
>            Assignee: Sean Busbey
>            Priority: Blocker
>             Fix For: 1.4.11
>
>         Attachments: HBASE-19663-branch-1.4.v0.patch, script.sh
>
> Cryptic failure trying to build the beta-1 RC. Fails like this:
> {code}
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 03:54 min
> [INFO] Finished at: 2017-12-29T01:13:15-08:00
> [INFO] Final Memory: 381M/9165M
> [INFO] ------------------------------------------------------------------------
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate:
> [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS
> [ERROR] reason: class file for javax.annotation.meta.When not found
> [ERROR] warning: unknown enum constant When.UNKNOWN
> [ERROR] warning: unknown enum constant When.MAYBE
> [ERROR] /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))"
> [ERROR] /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found.
> [ERROR] javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found
> [ERROR]
> [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc -J-Xmx2G @options @packages
> [ERROR]
> [ERROR] Refer to the generated Javadoc files in '/home/stack/hbase.git/target/site/apidocs' dir.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please read the following articles:
> [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
> javax.annotation.meta.TypeQualifierNickname comes out of jsr305 but we don't include it anywhere according to mvn dependency.
> Happens building the User API, both test and main.
> Excluding these lines gets us passing again:
> {code}
> 3511
> 3512   org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
> 3513
> 3514
> 3515   org.apache.yetus
> 3516   audience-annotations
> 3517   ${audience-annotations.version}
> 3518
> + 3519 true
> {code}
> Tried upgrading to a newer mvn site (ours is three years old) but that hit a different set of problems.
[jira] [Updated] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"
[ https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Busbey updated HBASE-19663:
--------------------------------
    Attachment: HBASE-19663-branch-1.4.v0.patch
[jira] [Commented] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"
[ https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953429#comment-16953429 ]

Sean Busbey commented on HBASE-19663:
-------------------------------------
Huh. Thought so. Lemme go switch off mobile and upload again.
[jira] [Commented] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"
[ https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953428#comment-16953428 ]

Michael Stack commented on HBASE-19663:
---------------------------------------
Did you post a patch [~busbey]?
[jira] [Updated] (HBASE-23177) If fail to open reference because FNFE, make it plain it is a Reference
[ https://issues.apache.org/jira/browse/HBASE-23177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Stack updated HBASE-23177:
----------------------------------
    Status: Patch Available  (was: Reopened)

branch-1.001 is the branch-2 patch but w/o the test (it was a pain to bring back the test changes, so I left them off).

> If fail to open reference because FNFE, make it plain it is a Reference
> -----------------------------------------------------------------------
>                 Key: HBASE-23177
>                 URL: https://issues.apache.org/jira/browse/HBASE-23177
>             Project: HBase
>          Issue Type: Bug
>          Components: Operability
>            Reporter: Michael Stack
>            Assignee: Michael Stack
>            Priority: Major
>             Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
>         Attachments: 0001-HBASE-23177-If-fail-to-open-reference-because-FNFE-m.patch,
>                      HBASE-23177.branch-1.001.patch
>
> If the root file for a Reference is missing, it takes a while to figure out.
> The Master side says failed open of Region; the RegionServer side talks about an FNFE
> for some seemingly random file. Better to dump out the Reference data: it helps in
> figuring out what has gone wrong. Otherwise it is confusingly hard to tie the FNFE
> to the root cause.
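The improvement described above amounts to wrapping the bare FileNotFoundException with Reference context before it propagates, so the log ties the missing file to the Reference that pointed at it. A hypothetical sketch (class, method, and path strings are all illustrative, not HBase's actual code):

```java
import java.io.FileNotFoundException;
import java.io.IOException;

public class ReferenceOpenDemo {
    // Wrap the FNFE so the operator sees that a Reference was involved,
    // instead of a bare FNFE for a seemingly random file.
    static IOException describeReferenceFailure(FileNotFoundException fnfe, String referencePath) {
        return new IOException(
            "Failed to open underlying file for Reference " + referencePath
                + "; the referenced file is missing", fnfe);
    }

    public static void main(String[] args) {
        FileNotFoundException fnfe = new FileNotFoundException("hdfs://.../some-random-file");
        IOException wrapped = describeReferenceFailure(fnfe, "region/cf/ref-file.parent");
        // The wrapped message now names the Reference; the FNFE stays as the cause.
        System.out.println(wrapped.getMessage());
    }
}
```

Keeping the original FNFE as the cause preserves the full stack trace while the top-level message carries the diagnosis.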
[jira] [Updated] (HBASE-23177) If fail to open reference because FNFE, make it plain it is a Reference
[ https://issues.apache.org/jira/browse/HBASE-23177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Stack updated HBASE-23177:
----------------------------------
    Attachment: HBASE-23177.branch-1.001.patch
[jira] [Created] (HBASE-23182) The create-release scripts are broken
Duo Zhang created HBASE-23182:
------------------------------
             Summary: The create-release scripts are broken
                 Key: HBASE-23182
                 URL: https://issues.apache.org/jira/browse/HBASE-23182
             Project: HBase
          Issue Type: Bug
          Components: scripts
            Reporter: Duo Zhang
            Assignee: Duo Zhang
             Fix For: 3.0.0

Only a few small bugs, but they do make releasing fail... Mostly introduced by HBASE-23092. Will upload the patch after I have successfully published 2.2.2RC0...
[jira] [Updated] (HBASE-23092) Make the RM tooling in dev-tools/create-release generic
[ https://issues.apache.org/jira/browse/HBASE-23092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang updated HBASE-23092:
------------------------------
    Component/s: scripts

> Make the RM tooling in dev-tools/create-release generic
> -------------------------------------------------------
>                 Key: HBASE-23092
>                 URL: https://issues.apache.org/jira/browse/HBASE-23092
>             Project: HBase
>          Issue Type: Task
>          Components: scripts
>            Reporter: Michael Stack
>            Assignee: Michael Stack
>            Priority: Major
>             Fix For: 3.0.0
>
> The dev-tools/create-release scripts were originally about creating hbase
> core RCs (the original idea and script versions were copied over from Apache
> Spark). Subsequently, they were checked into the hbase-operator-tools repo and
> genericized so they worked in that context. Today, after a few mods, the
> create-release scripts from hbase-operator-tools w/ some edits generated an
> RC of hbase-thirdparty.
> This issue is for edits to dev-tools/create-release on the master branch so the
> scripts can create RCs across these three repos at least (with more to
> follow).
[jira] [Commented] (HBASE-23065) [hbtop] Top-N heavy hitter user and client drill downs
[ https://issues.apache.org/jira/browse/HBASE-23065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953410#comment-16953410 ]

Toshihiro Suzuki commented on HBASE-23065:
------------------------------------------
[~an...@apache.org] Will review this. I think I can do it by this weekend.

> [hbtop] Top-N heavy hitter user and client drill downs
> ------------------------------------------------------
>                 Key: HBASE-23065
>                 URL: https://issues.apache.org/jira/browse/HBASE-23065
>             Project: HBase
>          Issue Type: Improvement
>          Components: hbtop, Operability
>            Reporter: Andrew Kyle Purtell
>            Assignee: Ankit Singhal
>            Priority: Major
>
> After HBASE-15519, or after an additional change on top of it that provides
> the necessary data in ClusterStatus, add drill-down top-N views of activity
> aggregated per user or per client IP. Only a relatively small N of the heavy
> hitters need be tracked, assuming this will be most useful when one or a
> handful of users or clients is generating problematic load and hbtop is
> invoked to learn their identity.
> This is a critical missing piece. After drilling down to find hot regions or
> tables, sometimes that is not enough; we also need to know which application
> or subset of clients out of many may be the source of the hot-spotting load.
[jira] [Commented] (HBASE-23181) Blocked WAL archive: "LogRoller: Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us"
[ https://issues.apache.org/jira/browse/HBASE-23181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953404#comment-16953404 ]

Michael Stack commented on HBASE-23181:
---------------------------------------
[~busbey] Yeah, it should have (and has) flushed already, but as part of close we should be removing the region from accounting. Not sure why it is not being removed. No complaints around close. Let me at least add a continue if the region is not online so we don't get stuck like this. It will be a workaround till we figure out why this is happening. It is catastrophic when it does.

The Region should be cleared from sequence id accounting in the close flush. I see the flush message here:

2019-10-16 23:10:55,884 INFO org.apache.hadoop.hbase.regionserver.HRegion: Finished flush of dataSize ~4.30 MB/4511054, heapSize ~4.33 MB/4543520, currentSize=0 B/0 for 8ee433ad59526778c53cc85ed3762d0b in 47ms, sequenceid=271148, compaction requested=true

... just before the closed region message here...

2019-10-16 23:10:55,897 INFO org.apache.hadoop.hbase.regionserver.handler.UnassignRegionHandler: Closed 8ee433ad59526778c53cc85ed3762d0b

... so the region should have been removed from sequence id accounting.

[~gxcheng] No ASYNC_WAL in the mix here, sir. Thanks for the intercession.

> Blocked WAL archive: "LogRoller: Failed to schedule flush of
> 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us"
>
>                 Key: HBASE-23181
>                 URL: https://issues.apache.org/jira/browse/HBASE-23181
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Michael Stack
>            Priority: Major
>
> On a heavily loaded cluster, the WAL count keeps rising and we can get into a
> state where we are not rolling the logs off fast enough. In particular, there
> is this interesting state at the extreme where we pick a region to flush
> because 'Too many WALs' but the region is actually not online. As the WAL
> count rises, we keep picking a region-to-flush that is no longer on the
> server. This condition blocks our being able to clear WALs; eventually WALs
> climb into the hundreds and the RS goes zombie with a full Call queue that
> starts throwing CallQueueTooLargeExceptions (bad if this server is the one
> carrying hbase:meta).
> Here is how it looks in the log:
> {code}
> # Here is the region closing
> 2019-10-16 23:10:55,897 INFO org.apache.hadoop.hbase.regionserver.handler.UnassignRegionHandler: Closed 8ee433ad59526778c53cc85ed3762d0b
>
> # Then soon after ...
> 2019-10-16 23:11:44,041 WARN org.apache.hadoop.hbase.regionserver.LogRoller: Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us
> 2019-10-16 23:11:45,006 INFO org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Too many WALs; count=45, max=32; forcing flush of 1 regions(s): 8ee433ad59526778c53cc85ed3762d0b
> ...
> # Later...
> 2019-10-16 23:20:25,427 INFO org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Too many WALs; count=542, max=32; forcing flush of 1 regions(s): 8ee433ad59526778c53cc85ed3762d0b
> 2019-10-16 23:20:25,427 WARN org.apache.hadoop.hbase.regionserver.LogRoller: Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us
> {code}
> I've seen this runaway-WALs behavior in old 1.2.x hbase; this exception is from 2.2.1.
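The workaround proposed in the comment above ("add a continue if region not online") can be modeled with plain data structures: when 'Too many WALs' picks regions to force-flush, skip any region that is no longer online instead of retrying it forever. All names here are illustrative stand-ins, not HBase's actual classes:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class WalFlushPicker {
    // Walk the regions still holding back WAL archival (region -> lowest
    // unflushed sequence id) and schedule flushes only for regions that are
    // still online; a closed region is skipped rather than retried forever.
    static int scheduleFlushes(Map<String, Long> unflushedSeqIds, Set<String> onlineRegions) {
        int scheduled = 0;
        for (String region : unflushedSeqIds.keySet()) {
            if (!onlineRegions.contains(region)) {
                // Workaround: region already closed on this server; don't get
                // stuck picking it again and again.
                continue;
            }
            scheduled++; // stand-in for requesting a real flush
        }
        return scheduled;
    }

    public static void main(String[] args) {
        Map<String, Long> seqIds = new LinkedHashMap<>();
        seqIds.put("8ee433ad59526778c53cc85ed3762d0b", 271148L); // the closed region from the log
        seqIds.put("aaaa0000bbbb1111cccc2222dddd3333", 300000L); // invented: a still-online region
        System.out.println(scheduleFlushes(seqIds, Set.of("aaaa0000bbbb1111cccc2222dddd3333"))); // prints 1
    }
}
```

This only unblocks the picker; as the comment notes, the real bug is that close should already have removed the region from the sequence-id accounting.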
[jira] [Updated] (HBASE-23181) Blocked WAL archive: "LogRoller: Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us"
[ https://issues.apache.org/jira/browse/HBASE-23181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Stack updated HBASE-23181:
----------------------------------
    Description:
On a heavily loaded cluster, WAL count keeps rising and we can get into a state where we are not rolling the logs off fast enough. In particular, there is this interesting state at the extreme where we pick a region to flush because 'Too many WALs' but the region is actually not online. As the WAL count rises, we keep picking a region-to-flush that is no longer on the server. This condition blocks our being able to clear WALs; eventually WALs climb into the hundreds and the RS goes zombie with a full Call queue that starts throwing CallQueueTooLargeExceptions (bad if this server is the one carrying hbase:meta).

Here is how it looks in the log:
{code}
# Here is region closing
2019-10-16 23:10:55,897 INFO org.apache.hadoop.hbase.regionserver.handler.UnassignRegionHandler: Closed 8ee433ad59526778c53cc85ed3762d0b

# Then soon after ...
2019-10-16 23:11:44,041 WARN org.apache.hadoop.hbase.regionserver.LogRoller: Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us
2019-10-16 23:11:45,006 INFO org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Too many WALs; count=45, max=32; forcing flush of 1 regions(s): 8ee433ad59526778c53cc85ed3762d0b
...
# Later...
2019-10-16 23:20:25,427 INFO org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Too many WALs; count=542, max=32; forcing flush of 1 regions(s): 8ee433ad59526778c53cc85ed3762d0b
2019-10-16 23:20:25,427 WARN org.apache.hadoop.hbase.regionserver.LogRoller: Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us
{code}

I've seen this runaway WALs in old 1.2.x hbase and this exception is from 2.2.1.
[jira] [Commented] (HBASE-23181) Blocked WAL archive: "LogRoller: Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us"
[ https://issues.apache.org/jira/browse/HBASE-23181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953399#comment-16953399 ]

Guangxu Cheng commented on HBASE-23181:
---------------------------------------
Are you writing data using ASYNC_WAL? If so, it may be related to HBASE-23157.
[jira] [Commented] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"
[ https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953384#comment-16953384 ] Sean Busbey commented on HBASE-19663: - okay same patch on branch-1 fixes site building on branch-1 for me. > site build fails complaining "javadoc: error - class file for > javax.annotation.meta.TypeQualifierNickname not found" > > > Key: HBASE-19663 > URL: https://issues.apache.org/jira/browse/HBASE-19663 > Project: HBase > Issue Type: Bug > Components: documentation, website >Reporter: Michael Stack >Assignee: Sean Busbey >Priority: Blocker > Fix For: 1.4.11 > > Attachments: script.sh > > > Cryptic failure trying to build beta-1 RC. Fails like this: > {code} > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 03:54 min > [INFO] Finished at: 2017-12-29T01:13:15-08:00 > [INFO] Final Memory: 381M/9165M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project > hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate: > [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS > [ERROR] reason: class file for javax.annotation.meta.When not found > [ERROR] warning: unknown enum constant When.UNKNOWN > [ERROR] warning: unknown enum constant When.MAYBE > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))" > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: reference not found: #matchingRows(Cell, byte[])) > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: reference not found: #matchingRows(Cell, byte[])) > [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found. 
> [ERROR] javadoc: error - class file for > javax.annotation.meta.TypeQualifierNickname not found > [ERROR] > [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc > -J-Xmx2G @options @packages > [ERROR] > [ERROR] Refer to the generated Javadoc files in > '/home/stack/hbase.git/target/site/apidocs' dir. > [ERROR] -> [Help 1] > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e > switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. > [ERROR] > [ERROR] For more information about the errors and possible solutions, please > read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException > {code} > javax.annotation.meta.TypeQualifierNickname is out of jsr305 but we don't > include this anywhere according to mvn dependency. > Happens building the User API both test and main. > Excluding these lines gets us passing again: > {code} > 3511 > 3512 > org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet > 3513 > 3514 > 3515 org.apache.yetus > 3516 audience-annotations > 3517 ${audience-annotations.version} > 3518 > + 3519 true > {code} > Tried upgrading to newer mvn site (ours is three years old) but that hit a > different set of problems. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23181) Blocked WAL archive: "LogRoller: Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us"
[ https://issues.apache.org/jira/browse/HBASE-23181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953381#comment-16953381 ] Sean Busbey commented on HBASE-23181: - Good analysis. Is there a reason we couldn't check our set of online regions before scheduling a flush? If the region successfully closed it must have flushed out already, right? > Blocked WAL archive: "LogRoller: Failed to schedule flush of > 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us" > -- > > Key: HBASE-23181 > URL: https://issues.apache.org/jira/browse/HBASE-23181 > Project: HBase > Issue Type: Bug >Reporter: Michael Stack >Priority: Major > > On a heavily loaded cluster, WAL count keeps rising and we can get into a > state where we are not rolling the logs off fast enough. In particular, there > is this interesting state at the extreme where we pick a region to flush > because 'Too many WALs' but the region is actually not online. As the WAL > count rises, we keep picking a region-to-flush that is no longer on the > server. This condition blocks our being able to clear WALs; eventually WALs > climb into the hundreds and the RS goes zombie with a full Call queue that > starts throwing CallQueueTooLargeExceptions (bad if this server is the one > carrying hbase:meta). > Here is how it looks in the log: > {code} > # Here is region closing > 2019-10-16 23:10:55,897 INFO > org.apache.hadoop.hbase.regionserver.handler.UnassignRegionHandler: Closed > 8ee433ad59526778c53cc85ed3762d0b > > # Then soon after ... > 2019-10-16 23:11:44,041 WARN org.apache.hadoop.hbase.regionserver.LogRoller: > Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is > not online on us > 2019-10-16 23:11:45,006 INFO > org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Too many WALs; > count=45, max=32; forcing flush of 1 regions(s): > 8ee433ad59526778c53cc85ed3762d0b > ... > # Later... 
> 2019-10-16 23:20:25,427 INFO > org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Too many WALs; > count=542, max=32; forcing flush of 1 regions(s): > 8ee433ad59526778c53cc85ed3762d0b > 2019-10-16 23:20:25,427 WARN org.apache.hadoop.hbase.regionserver.LogRoller: > Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is > not online on us > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
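The fix Sean floats here can be modeled as: before the log roller schedules a flush of the region pinning the oldest WAL, consult the server's set of online regions, and drop a closed region from the WAL accounting instead of re-picking it forever (the loop the issue describes). This is a toy sketch under that assumption; all names are hypothetical, not the actual LogRoller/AbstractFSWAL API.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class WalFlushSketch {
  /** encodedRegionName -> lowest unflushed sequence id pinning a WAL. */
  private final Map<String, Long> unflushedSeqIds = new LinkedHashMap<>();
  /** Live view of the regions currently open on this server. */
  private final Set<String> onlineRegions;

  public WalFlushSketch(Set<String> onlineRegions) {
    this.onlineRegions = onlineRegions;
  }

  public void trackRegion(String encodedName, long seqId) {
    unflushedSeqIds.put(encodedName, seqId);
  }

  /**
   * Returns the next online region to flush so the oldest WAL can be
   * archived, or null when no tracked region is still on this server
   * (meaning nothing actually blocks archiving).
   */
  public String pickRegionToFlush() {
    Iterator<String> it = unflushedSeqIds.keySet().iterator();
    while (it.hasNext()) {
      String region = it.next();
      if (onlineRegions.contains(region)) {
        return region;
      }
      // Closed region: its edits flushed on close, so stop picking it.
      it.remove();
    }
    return null;
  }
}
```

The key design point is the `it.remove()`: a region that failed the online check is forgotten rather than retried, so the WAL count cannot climb while the roller spins on a departed region.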
[jira] [Commented] (HBASE-22370) ByteBuf LEAK ERROR
[ https://issues.apache.org/jira/browse/HBASE-22370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953373#comment-16953373 ] Hudson commented on HBASE-22370: Results for branch branch-2.1 [build #1681 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1681/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1681//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1681//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1681//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > ByteBuf LEAK ERROR > -- > > Key: HBASE-22370 > URL: https://issues.apache.org/jira/browse/HBASE-22370 > Project: HBase > Issue Type: Bug > Components: rpc, wal >Affects Versions: 2.2.1 >Reporter: Lijin Bin >Assignee: Lijin Bin >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.2, 2.1.8 > > Attachments: HBASE-22370-master-v1.patch > > > We ran a failover test and it threw a leak error; this is hard to reproduce. > {code} > 2019-05-06 02:30:27,781 ERROR [AsyncFSWAL-0] util.ResourceLeakDetector: LEAK: > ByteBuf.release() was not called before it's garbage-collected. See > http://netty.io/wiki/reference-counted-objects.html for more information. 
> Recent access records: > Created at: > > org.apache.hbase.thirdparty.io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:334) > > org.apache.hbase.thirdparty.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187) > > org.apache.hbase.thirdparty.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178) > > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush0(FanOutOneBlockAsyncDFSOutput.java:494) > > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush(FanOutOneBlockAsyncDFSOutput.java:513) > > org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.sync(AsyncProtobufLogWriter.java:144) > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.sync(AsyncFSWAL.java:353) > > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.consume(AsyncFSWAL.java:536) > > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > java.lang.Thread.run(Thread.java:748) > {code} > If FanOutOneBlockAsyncDFSOutput#endBlock throws an Exception before calling > "buf.release();", this buf has no chance to be released. > In CallRunner, if the call is skipped or a timed-out call is dropped, cleanup is > never called. -- This message was sent by Atlassian Jira (v8.3.4#803005)
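The leak shape described above (an exception thrown between buffer allocation and its release) has a standard fix: move the release into a finally block so every exit path drops the reference. This is a minimal self-contained sketch of that pattern; `RefCountedBuf` is a stand-in for Netty's `ByteBuf`, not the real API, and the throwing branch stands in for `endBlock` failing.

```java
public class LeakFixSketch {
  /** Tiny stand-in for a reference-counted buffer. */
  static class RefCountedBuf {
    private int refCnt = 1;
    int refCnt() { return refCnt; }
    void release() { refCnt--; }
  }

  /** Buggy shape: a throw before release() leaks the buffer. */
  static RefCountedBuf flushLeaky(boolean endBlockFails) {
    RefCountedBuf buf = new RefCountedBuf();
    if (endBlockFails) {
      throw new IllegalStateException("endBlock failed"); // buf leaked here
    }
    buf.release();
    return buf;
  }

  /** Fixed shape: release() in finally runs on both the normal and the
   *  exceptional exit path, so the reference count always reaches zero. */
  static RefCountedBuf flushSafe(boolean endBlockFails) {
    RefCountedBuf buf = new RefCountedBuf();
    try {
      if (endBlockFails) {
        throw new IllegalStateException("endBlock failed");
      }
      return buf;
    } finally {
      buf.release();
    }
  }
}
```

The same reasoning applies to the CallRunner case mentioned in the report: a skipped or dropped call is an early-exit path, and cleanup must sit on that path too, not only on the normal one.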
[jira] [Commented] (HBASE-23177) If fail to open reference because FNFE, make it plain it is a Reference
[ https://issues.apache.org/jira/browse/HBASE-23177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953375#comment-16953375 ] Hudson commented on HBASE-23177: Results for branch branch-2.1 [build #1681 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1681/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1681//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1681//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1681//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > If fail to open reference because FNFE, make it plain it is a Reference > --- > > Key: HBASE-23177 > URL: https://issues.apache.org/jira/browse/HBASE-23177 > Project: HBase > Issue Type: Bug > Components: Operability >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3 > > Attachments: > 0001-HBASE-23177-If-fail-to-open-reference-because-FNFE-m.patch > > > If the root file for a Reference is missing, it takes a while to figure out. > Master-side says failed open of Region. RegionServer side it talks about FNFE > for some random file. Better, dump out Reference data. Helps figuring what > has gone wrong. Otherwise it's confusingly hard to tie the FNFE to root cause. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-20626) Change the value of "Requests Per Second" on WEBUI
[ https://issues.apache.org/jira/browse/HBASE-20626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953374#comment-16953374 ] Hudson commented on HBASE-20626: Results for branch branch-2.1 [build #1681 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1681/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1681//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1681//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1681//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Change the value of "Requests Per Second" on WEBUI > -- > > Key: HBASE-20626 > URL: https://issues.apache.org/jira/browse/HBASE-20626 > Project: HBase > Issue Type: Improvement > Components: metrics, UI >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.2, 2.1.8 > > Attachments: HBASE-20626.master.001.patch, > HBASE-20626.master.002.patch, HBASE-20626.master.003.patch > > > Now we use "totalRequestCount"(RSRpcServices#requestCount) to calculate > requests per second. > After HBASE-18469, "totalRequestCount" counts only once for a multi > request (this includes requests that are not serviced by regions). > When we have a large number of read and write requests, the value of > "Requests Per Second" is very small, which does not reflect the load of the > cluster. 
> Maybe it is more reasonable to use "totalRowActionRequestCount" to calculate > RPS? -- This message was sent by Atlassian Jira (v8.3.4#803005)
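The UI number in question is just the delta of a monotonically increasing counter divided by the sampling interval, so which counter is sampled is the whole issue: a per-RPC counter bumps once for a multi() carrying N row actions, while a per-row-action counter bumps N times. A sketch of that computation, with illustrative names rather than the actual metrics API:

```java
public class RpsSketch {
  /**
   * Requests per second from two samples of a monotonic request counter
   * taken intervalMillis apart. Returns 0 on a counter reset (curr < prev)
   * or a non-positive interval rather than reporting a bogus rate.
   */
  public static double requestsPerSecond(long prevCount, long currCount,
      long intervalMillis) {
    if (intervalMillis <= 0 || currCount < prevCount) {
      return 0.0;
    }
    return (currCount - prevCount) * 1000.0 / intervalMillis;
  }
}
```

With 100 multi() calls of 50 rows each in a 10-second window, a totalRequestCount-style counter yields 10 requests/s while a totalRowActionRequestCount-style counter yields 500 — the gap the description complains about.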
[jira] [Created] (HBASE-23181) Blocked WAL archive: "LogRoller: Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us"
Michael Stack created HBASE-23181: - Summary: Blocked WAL archive: "LogRoller: Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us" Key: HBASE-23181 URL: https://issues.apache.org/jira/browse/HBASE-23181 Project: HBase Issue Type: Bug Reporter: Michael Stack On a heavily loaded cluster, WAL count keeps rising and we can get into a state where we are not rolling the logs off fast enough. In particular, there is this interesting state at the extreme where we pick a region to flush because 'Too many WALs' but the region is actually not online. As the WAL count rises, we keep picking a region-to-flush that is no longer on the server. This condition blocks our being able to clear WALs; eventually WALs climb into the hundreds and the RS goes zombie with a full Call queue that starts throwing CallQueueTooLargeExceptions (bad if this server is the one carrying hbase:meta). Here is how it looks in the log: {code} # Here is region closing 2019-10-16 23:10:55,897 INFO org.apache.hadoop.hbase.regionserver.handler.UnassignRegionHandler: Closed 8ee433ad59526778c53cc85ed3762d0b # Then soon after ... 2019-10-16 23:11:44,041 WARN org.apache.hadoop.hbase.regionserver.LogRoller: Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us 2019-10-16 23:11:45,006 INFO org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Too many WALs; count=45, max=32; forcing flush of 1 regions(s): 8ee433ad59526778c53cc85ed3762d0b ... # Later... 2019-10-16 23:20:25,427 INFO org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Too many WALs; count=542, max=32; forcing flush of 1 regions(s): 8ee433ad59526778c53cc85ed3762d0b 2019-10-16 23:20:25,427 WARN org.apache.hadoop.hbase.regionserver.LogRoller: Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"
[ https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953365#comment-16953365 ] Sean Busbey commented on HBASE-19663: - I get the same error while building branch-1. testing with the above patch now. > site build fails complaining "javadoc: error - class file for > javax.annotation.meta.TypeQualifierNickname not found" > > > Key: HBASE-19663 > URL: https://issues.apache.org/jira/browse/HBASE-19663 > Project: HBase > Issue Type: Bug > Components: documentation, website >Reporter: Michael Stack >Assignee: Sean Busbey >Priority: Blocker > Fix For: 1.4.11 > > Attachments: script.sh > > > Cryptic failure trying to build beta-1 RC. Fails like this: > {code} > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 03:54 min > [INFO] Finished at: 2017-12-29T01:13:15-08:00 > [INFO] Final Memory: 381M/9165M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project > hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate: > [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS > [ERROR] reason: class file for javax.annotation.meta.When not found > [ERROR] warning: unknown enum constant When.UNKNOWN > [ERROR] warning: unknown enum constant When.MAYBE > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))" > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: reference not found: #matchingRows(Cell, byte[])) > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: reference not found: #matchingRows(Cell, byte[])) > [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found. 
> [ERROR] javadoc: error - class file for > javax.annotation.meta.TypeQualifierNickname not found > [ERROR] > [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc > -J-Xmx2G @options @packages > [ERROR] > [ERROR] Refer to the generated Javadoc files in > '/home/stack/hbase.git/target/site/apidocs' dir. > [ERROR] -> [Help 1] > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e > switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. > [ERROR] > [ERROR] For more information about the errors and possible solutions, please > read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException > {code} > javax.annotation.meta.TypeQualifierNickname is out of jsr305 but we don't > include this anywhere according to mvn dependency. > Happens building the User API both test and main. > Excluding these lines gets us passing again: > {code} > 3511 > 3512 > org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet > 3513 > 3514 > 3515 org.apache.yetus > 3516 audience-annotations > 3517 ${audience-annotations.version} > 3518 > + 3519 true > {code} > Tried upgrading to newer mvn site (ours is three years old) but that hit a > different set of problems. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"
[ https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HBASE-19663: Fix Version/s: (was: 3.0.0) Status: Patch Available (was: Open) -v0 specific to branch-1.4 - add jsr305 during javadoc building - doesn't add jsr305 during any other build steps or to our artifacts I'll keep digging, but so far I don't think this is still a problem at all on branches-2 or master. > site build fails complaining "javadoc: error - class file for > javax.annotation.meta.TypeQualifierNickname not found" > > > Key: HBASE-19663 > URL: https://issues.apache.org/jira/browse/HBASE-19663 > Project: HBase > Issue Type: Bug > Components: documentation, website >Reporter: Michael Stack >Assignee: Sean Busbey >Priority: Blocker > Fix For: 1.4.11 > > Attachments: script.sh > > > Cryptic failure trying to build beta-1 RC. Fails like this: > {code} > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 03:54 min > [INFO] Finished at: 2017-12-29T01:13:15-08:00 > [INFO] Final Memory: 381M/9165M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project > hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate: > [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS > [ERROR] reason: class file for javax.annotation.meta.When not found > [ERROR] warning: unknown enum constant When.UNKNOWN > [ERROR] warning: unknown enum constant When.MAYBE > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))" > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: reference not found: #matchingRows(Cell, byte[])) > [ERROR] > /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762: > warning - Tag @link: reference not found: 
#matchingRows(Cell, byte[])) > [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found. > [ERROR] javadoc: error - class file for > javax.annotation.meta.TypeQualifierNickname not found > [ERROR] > [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc > -J-Xmx2G @options @packages > [ERROR] > [ERROR] Refer to the generated Javadoc files in > '/home/stack/hbase.git/target/site/apidocs' dir. > [ERROR] -> [Help 1] > [ERROR] > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e > switch. > [ERROR] Re-run Maven using the -X switch to enable full debug logging. > [ERROR] > [ERROR] For more information about the errors and possible solutions, please > read the following articles: > [ERROR] [Help 1] > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException > {code} > javax.annotation.meta.TypeQualifierNickname is out of jsr305 but we don't > include this anywhere according to mvn dependency. > Happens building the User API both test and main. > Excluding these lines gets us passing again: > {code} > 3511 > 3512 > org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet > 3513 > 3514 > 3515 org.apache.yetus > 3516 audience-annotations > 3517 ${audience-annotations.version} > 3518 > + 3519 true > {code} > Tried upgrading to newer mvn site (ours is three years old) but that hit a > different set of problems. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23180) Create a nightly build to verify hbck2
[ https://issues.apache.org/jira/browse/HBASE-23180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sakthi updated HBASE-23180: --- Fix Version/s: hbase-operator-tools-1.1.0 > Create a nightly build to verify hbck2 > -- > > Key: HBASE-23180 > URL: https://issues.apache.org/jira/browse/HBASE-23180 > Project: HBase > Issue Type: Task >Reporter: Sakthi >Assignee: Sakthi >Priority: Major > Labels: hbck2 > Fix For: hbase-operator-tools-1.1.0 > > > Quoting myself from the discussion thread from the dev mailing list "*How do > we test hbck2?*" - > "Planning to start working on a nightly build that can spin up a > mini-cluster, load some data into it, do some actions to bring the cluster > into an undesirable state that hbck2 can fix and then invoke the hbck2 to see > if things work well. > > Plan is to start small with one of the hbck2 commands and remaining ones can > be added incrementally. As of now I would like to start with making sure the > job uses one of the hbase versions (probably 2.1.x/2.2.x), we can discuss > about the need to run the job against all the present hbase versions/taking > in a bunch of hbase versions as input and running against them/or just a > single version. > > The job script would be located in our operator-tools repo." -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HBASE-23180) Create a nightly build to verify hbck2
Sakthi created HBASE-23180: -- Summary: Create a nightly build to verify hbck2 Key: HBASE-23180 URL: https://issues.apache.org/jira/browse/HBASE-23180 Project: HBase Issue Type: Task Reporter: Sakthi Assignee: Sakthi Quoting myself from the discussion thread from the dev mailing list "*How do we test hbck2?*" - "Planning to start working on a nightly build that can spin up a mini-cluster, load some data into it, do some actions to bring the cluster into an undesirable state that hbck2 can fix and then invoke the hbck2 to see if things work well. Plan is to start small with one of the hbck2 commands and remaining ones can be added incrementally. As of now I would like to start with making sure the job uses one of the hbase versions (probably 2.1.x/2.2.x), we can discuss about the need to run the job against all the present hbase versions/taking in a bunch of hbase versions as input and running against them/or just a single version. The job script would be located in our operator-tools repo." -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-22370) ByteBuf LEAK ERROR
[ https://issues.apache.org/jira/browse/HBASE-22370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953337#comment-16953337 ] Hudson commented on HBASE-22370: Results for branch branch-2 [build #2325 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2325/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2325//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2325//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2325//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > ByteBuf LEAK ERROR > -- > > Key: HBASE-22370 > URL: https://issues.apache.org/jira/browse/HBASE-22370 > Project: HBase > Issue Type: Bug > Components: rpc, wal >Affects Versions: 2.2.1 >Reporter: Lijin Bin >Assignee: Lijin Bin >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.2, 2.1.8 > > Attachments: HBASE-22370-master-v1.patch > > > We ran a failover test and it threw a leak error; this is hard to reproduce. > {code} > 2019-05-06 02:30:27,781 ERROR [AsyncFSWAL-0] util.ResourceLeakDetector: LEAK: > ByteBuf.release() was not called before it's garbage-collected. See > http://netty.io/wiki/reference-counted-objects.html for more information. 
> Recent access records: > Created at: > > org.apache.hbase.thirdparty.io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:334) > > org.apache.hbase.thirdparty.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187) > > org.apache.hbase.thirdparty.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178) > > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush0(FanOutOneBlockAsyncDFSOutput.java:494) > > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush(FanOutOneBlockAsyncDFSOutput.java:513) > > org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.sync(AsyncProtobufLogWriter.java:144) > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.sync(AsyncFSWAL.java:353) > > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.consume(AsyncFSWAL.java:536) > > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > java.lang.Thread.run(Thread.java:748) > {code} > If FanOutOneBlockAsyncDFSOutput#endBlock throws an Exception before calling > "buf.release();", this buf has no chance to be released. > In CallRunner, if the call is skipped or a timed-out call is dropped, cleanup is > never called. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-20626) Change the value of "Requests Per Second" on WEBUI
[ https://issues.apache.org/jira/browse/HBASE-20626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953336#comment-16953336 ] Hudson commented on HBASE-20626: Results for branch branch-2 [build #2325 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2325/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2325//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2325//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2325//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Change the value of "Requests Per Second" on WEBUI > -- > > Key: HBASE-20626 > URL: https://issues.apache.org/jira/browse/HBASE-20626 > Project: HBase > Issue Type: Improvement > Components: metrics, UI >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.2, 2.1.8 > > Attachments: HBASE-20626.master.001.patch, > HBASE-20626.master.002.patch, HBASE-20626.master.003.patch > > > Now we use "totalRequestCount"(RSRpcServices#requestCount) to calculate > requests per second. > After HBASE-18469, "totalRequestCount" counts only once for a multi > request (this includes requests that are not serviced by regions). > When we have a large number of read and write requests, the value of > "Requests Per Second" is very small, which does not reflect the load of the > cluster. > Maybe it is more reasonable to use "totalRowActionRequestCount" to calculate > RPS? 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23177) If fail to open reference because FNFE, make it plain it is a Reference
[ https://issues.apache.org/jira/browse/HBASE-23177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953338#comment-16953338 ] Hudson commented on HBASE-23177: Results for branch branch-2 [build #2325 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2325/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2325//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2325//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2325//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > If fail to open reference because FNFE, make it plain it is a Reference > --- > > Key: HBASE-23177 > URL: https://issues.apache.org/jira/browse/HBASE-23177 > Project: HBase > Issue Type: Bug > Components: Operability >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3 > > Attachments: > 0001-HBASE-23177-If-fail-to-open-reference-because-FNFE-m.patch > > > If the root file for a Reference is missing, it takes a while to figure out. > Master-side says failed open of Region. RegionServer side it talks about FNFE > for some random file. Better, dump out Reference data. Helps figuring what > has gone wrong. Otherwise it's confusingly hard to tie the FNFE to root cause. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions
VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions URL: https://github.com/apache/hbase/pull/623#discussion_r335783244 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobUtils.java ## @@ -907,6 +789,143 @@ public static boolean hasMobColumns(TableDescriptor htd) { return false; } + /** + * Get list of Mob column families (if any exist) + * @param htd table descriptor + * @return list of Mob column families + */ + public static List<ColumnFamilyDescriptor> getMobColumnFamilies(TableDescriptor htd) { +List<ColumnFamilyDescriptor> fams = new ArrayList<>(); +ColumnFamilyDescriptor[] hcds = htd.getColumnFamilies(); +for (ColumnFamilyDescriptor hcd : hcds) { + if (hcd.isMobEnabled()) { +fams.add(hcd); + } +} +return fams; + } + + /** + * Performs housekeeping file cleaning (called by MOB Cleaner chore) + * @param conf configuration + * @param table table name + * @throws IOException + */ + public static void cleanupObsoleteMobFiles(Configuration conf, TableName table) + throws IOException { +try (final Connection conn = ConnectionFactory.createConnection(conf); +final Admin admin = conn.getAdmin()) { + TableDescriptor htd = admin.getDescriptor(table); + List<ColumnFamilyDescriptor> list = getMobColumnFamilies(htd); + if (list.size() == 0) { +LOG.info("Skipping non-MOB table [" + table + "]"); +return; + } + Path rootDir = FSUtils.getRootDir(conf); + Path tableDir = FSUtils.getTableDir(rootDir, table); + // How safe is this call? + List<Path> regionDirs = FSUtils.getRegionDirs(FileSystem.get(conf), tableDir); + + Set<String> allActiveMobFileName = new HashSet<>(); Review comment: Log WARN if # of files exceeds 1M. Is that OK? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
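The cleanupObsoleteMobFiles hunk above builds a set of MOB file names still referenced by active store files, then archives everything in the MOB directory outside that set. Its core set logic can be distilled into a few lines (names are assumed for illustration; this is not the patch's actual code):

```java
// Hypothetical distillation of the MOB cleaner chore's core logic: anything
// listed in the MOB directory that no active store file references is
// obsolete and can be archived.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MobCleanerSketch {
  static List<String> findObsolete(Set<String> allActiveMobFileNames,
                                   List<String> mobDirListing) {
    List<String> obsolete = new ArrayList<>();
    for (String f : mobDirListing) {
      if (!allActiveMobFileNames.contains(f)) {
        obsolete.add(f); // not referenced by any region -> safe to archive
      }
    }
    return obsolete;
  }

  public static void main(String[] args) {
    Set<String> active = new HashSet<>(Arrays.asList("a", "b"));
    System.out.println(findObsolete(active, Arrays.asList("a", "b", "c"))); // [c]
  }
}
```

This also shows why the reviewer worries about the set's size: `allActiveMobFileName` holds one entry per referenced MOB file, so very large tables could make it big enough to warrant the suggested WARN.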
[GitHub] [hbase] guangxuCheng edited a comment on issue #725: HBASE-23176 delete_all_snapshot does not work with regex
guangxuCheng edited a comment on issue #725: HBASE-23176 delete_all_snapshot does not work with regex URL: https://github.com/apache/hbase/pull/725#issuecomment-542964878 https://github.com/apache/hbase/blob/0043dfebc5e43705818071c3de062211943829f1/hbase-shell/src/main/ruby/shell/commands/delete_all_snapshot.rb#L31-L58 @karthikhw There are two places in delete_all_snapshot.rb (L37 and L56) that need to be modified, but you only modified one of them. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] guangxuCheng commented on issue #725: HBASE-23176 delete_all_snapshot does not work with regex
guangxuCheng commented on issue #725: HBASE-23176 delete_all_snapshot does not work with regex URL: https://github.com/apache/hbase/pull/725#issuecomment-542964878 @karthikhw There are two places in delete_all_snapshot.rb that need to be modified, but you only modified one of them. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions
VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions URL: https://github.com/apache/hbase/pull/623#discussion_r33534 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobUtils.java ## @@ -907,6 +789,143 @@ public static boolean hasMobColumns(TableDescriptor htd) { return false; } + /** + * Get list of Mob column families (if any exist) + * @param htd table descriptor + * @return list of Mob column families + */ + public static List<ColumnFamilyDescriptor> getMobColumnFamilies(TableDescriptor htd) { +List<ColumnFamilyDescriptor> fams = new ArrayList<>(); +ColumnFamilyDescriptor[] hcds = htd.getColumnFamilies(); +for (ColumnFamilyDescriptor hcd : hcds) { + if (hcd.isMobEnabled()) { +fams.add(hcd); + } +} +return fams; + } + + /** + * Performs housekeeping file cleaning (called by MOB Cleaner chore) + * @param conf configuration + * @param table table name + * @throws IOException + */ + public static void cleanupObsoleteMobFiles(Configuration conf, TableName table) + throws IOException { +try (final Connection conn = ConnectionFactory.createConnection(conf); +final Admin admin = conn.getAdmin()) { + TableDescriptor htd = admin.getDescriptor(table); + List<ColumnFamilyDescriptor> list = getMobColumnFamilies(htd); + if (list.size() == 0) { +LOG.info("Skipping non-MOB table [" + table + "]"); +return; + } + Path rootDir = FSUtils.getRootDir(conf); + Path tableDir = FSUtils.getTableDir(rootDir, table); + // How safe is this call? Review comment: Your second question is what I meant: are there chances the directory listing will contain partial results? In the case of a split, no. Daughter regions are created first, and the parent region is deleted once both daughters have gone through major compaction. In the case of a merge, the parents will be deleted only after the merged region is major compacted. So we will never have a partial (in terms of store files) result from a directory listing call. I assume this call is safe for our purposes. This is an automated message from the Apache Git Service.
[jira] [Resolved] (HBASE-23107) Avoid temp byte array creation when doing cacheDataOnWrite
[ https://issues.apache.org/jira/browse/HBASE-23107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu resolved HBASE-23107. -- Hadoop Flags: Reviewed Resolution: Fixed Pushed to branch-2 & master, Thanks [~javaman_chen] for contributing , and thanks all for reviewing & feedback. > Avoid temp byte array creation when doing cacheDataOnWrite > -- > > Key: HBASE-23107 > URL: https://issues.apache.org/jira/browse/HBASE-23107 > Project: HBase > Issue Type: Improvement > Components: BlockCache, HFile >Reporter: chenxu >Assignee: chenxu >Priority: Major > Labels: gc > Fix For: 3.0.0, 2.3.0 > > Attachments: flamegraph_after.svg, flamegraph_before.svg > > > code in HFileBlock.Writer.cloneUncompressedBufferWithHeader > {code:java} > ByteBuffer cloneUncompressedBufferWithHeader() { > expectState(State.BLOCK_READY); > byte[] uncompressedBlockBytesWithHeader = baosInMemory.toByteArray(); > … > } > {code} > When cacheOnWrite feature enabled, a temp byte array was created in order to > copy block’s data, we can avoid this by use of ByteBuffAllocator. This can > improve GC performance in write heavy scenarios. -- This message was sent by Atlassian Jira (v8.3.4#803005)
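The quoted cloneUncompressedBufferWithHeader shows where the garbage comes from: toByteArray() materializes a fresh temp array per cached block. A simplified sketch of the idea behind the fix, using plain NIO (HBase's actual change goes through its ByteBuffAllocator, not shown here):

```java
// Illustrative sketch of the HBASE-23107 idea (not the real ByteBuffAllocator
// API): instead of baosInMemory.toByteArray(), which allocates a temp byte[]
// per block, stream the buffered bytes into a reusable destination buffer so
// cache-on-write creates no per-block garbage.
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;

public class CacheOnWriteSketch {

  // Before: toByteArray() copies the block into a brand-new array every time.
  static ByteBuffer cloneWithTempArray(ByteArrayOutputStream baos) {
    return ByteBuffer.wrap(baos.toByteArray());
  }

  // After (sketch): write straight into a caller-supplied, reusable buffer.
  static ByteBuffer cloneIntoBuffer(ByteArrayOutputStream baos, ByteBuffer dest) {
    dest.clear();
    try {
      baos.writeTo(new OutputStream() { // no intermediate array is created
        @Override public void write(int b) { dest.put((byte) b); }
        @Override public void write(byte[] b, int off, int len) { dest.put(b, off, len); }
      });
    } catch (IOException e) {
      throw new UncheckedIOException(e); // cannot happen for an in-memory target
    }
    dest.flip();
    return dest;
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    baos.write(new byte[] {1, 2, 3});
    ByteBuffer reused = ByteBuffer.allocate(16); // allocated once, reused per block
    System.out.println(cloneIntoBuffer(baos, reused).remaining()); // 3
  }
}
```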
[jira] [Updated] (HBASE-23107) Avoid temp byte array creation when doing cacheDataOnWrite
[ https://issues.apache.org/jira/browse/HBASE-23107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-23107: - Labels: gc (was: ) > Avoid temp byte array creation when doing cacheDataOnWrite > -- > > Key: HBASE-23107 > URL: https://issues.apache.org/jira/browse/HBASE-23107 > Project: HBase > Issue Type: Improvement > Components: BlockCache, HFile >Reporter: chenxu >Assignee: chenxu >Priority: Major > Labels: gc > Fix For: 3.0.0, 2.3.0 > > Attachments: flamegraph_after.svg, flamegraph_before.svg > > > code in HFileBlock.Writer.cloneUncompressedBufferWithHeader > {code:java} > ByteBuffer cloneUncompressedBufferWithHeader() { > expectState(State.BLOCK_READY); > byte[] uncompressedBlockBytesWithHeader = baosInMemory.toByteArray(); > … > } > {code} > When cacheOnWrite feature enabled, a temp byte array was created in order to > copy block’s data, we can avoid this by use of ByteBuffAllocator. This can > improve GC performance in write heavy scenarios. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23107) Avoid temp byte array creation when doing cacheDataOnWrite
[ https://issues.apache.org/jira/browse/HBASE-23107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-23107: - Fix Version/s: 2.3.0 3.0.0 > Avoid temp byte array creation when doing cacheDataOnWrite > -- > > Key: HBASE-23107 > URL: https://issues.apache.org/jira/browse/HBASE-23107 > Project: HBase > Issue Type: Improvement >Reporter: chenxu >Assignee: chenxu >Priority: Major > Fix For: 3.0.0, 2.3.0 > > Attachments: flamegraph_after.svg, flamegraph_before.svg > > > code in HFileBlock.Writer.cloneUncompressedBufferWithHeader > {code:java} > ByteBuffer cloneUncompressedBufferWithHeader() { > expectState(State.BLOCK_READY); > byte[] uncompressedBlockBytesWithHeader = baosInMemory.toByteArray(); > … > } > {code} > When cacheOnWrite feature enabled, a temp byte array was created in order to > copy block’s data, we can avoid this by use of ByteBuffAllocator. This can > improve GC performance in write heavy scenarios. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23107) Avoid temp byte array creation when doing cacheDataOnWrite
[ https://issues.apache.org/jira/browse/HBASE-23107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-23107: - Component/s: BlockCache > Avoid temp byte array creation when doing cacheDataOnWrite > -- > > Key: HBASE-23107 > URL: https://issues.apache.org/jira/browse/HBASE-23107 > Project: HBase > Issue Type: Improvement > Components: BlockCache, HFile >Reporter: chenxu >Assignee: chenxu >Priority: Major > Fix For: 3.0.0, 2.3.0 > > Attachments: flamegraph_after.svg, flamegraph_before.svg > > > code in HFileBlock.Writer.cloneUncompressedBufferWithHeader > {code:java} > ByteBuffer cloneUncompressedBufferWithHeader() { > expectState(State.BLOCK_READY); > byte[] uncompressedBlockBytesWithHeader = baosInMemory.toByteArray(); > … > } > {code} > When cacheOnWrite feature enabled, a temp byte array was created in order to > copy block’s data, we can avoid this by use of ByteBuffAllocator. This can > improve GC performance in write heavy scenarios. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23107) Avoid temp byte array creation when doing cacheDataOnWrite
[ https://issues.apache.org/jira/browse/HBASE-23107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zheng Hu updated HBASE-23107: - Component/s: HFile > Avoid temp byte array creation when doing cacheDataOnWrite > -- > > Key: HBASE-23107 > URL: https://issues.apache.org/jira/browse/HBASE-23107 > Project: HBase > Issue Type: Improvement > Components: HFile >Reporter: chenxu >Assignee: chenxu >Priority: Major > Fix For: 3.0.0, 2.3.0 > > Attachments: flamegraph_after.svg, flamegraph_before.svg > > > code in HFileBlock.Writer.cloneUncompressedBufferWithHeader > {code:java} > ByteBuffer cloneUncompressedBufferWithHeader() { > expectState(State.BLOCK_READY); > byte[] uncompressedBlockBytesWithHeader = baosInMemory.toByteArray(); > … > } > {code} > When cacheOnWrite feature enabled, a temp byte array was created in order to > copy block’s data, we can avoid this by use of ByteBuffAllocator. This can > improve GC performance in write heavy scenarios. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] openinx merged pull request #678: HBASE-23107 Avoid temp byte array creation when doing cacheDataOnWrite
openinx merged pull request #678: HBASE-23107 Avoid temp byte array creation when doing cacheDataOnWrite URL: https://github.com/apache/hbase/pull/678 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] openinx commented on a change in pull request #678: HBASE-23107 Avoid temp byte array creation when doing cacheDataOnWrite
openinx commented on a change in pull request #678: HBASE-23107 Avoid temp byte array creation when doing cacheDataOnWrite URL: https://github.com/apache/hbase/pull/678#discussion_r335778269 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java ## @@ -1012,6 +1021,18 @@ private void putHeader(byte[] dest, int offset, int onDiskSize, Bytes.putInt(dest, offset, onDiskDataSize); } +private void putHeader(ByteBuff buff, int onDiskSize, Review comment: That's OK. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HBASE-23168) Generate CHANGES.md and RELEASENOTES.md for 2.2.2
[ https://issues.apache.org/jira/browse/HBASE-23168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953310#comment-16953310 ] Hudson commented on HBASE-23168: Results for branch branch-2.2 [build #664 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/664/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/664//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/664//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/664//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Generate CHANGES.md and RELEASENOTES.md for 2.2.2 > - > > Key: HBASE-23168 > URL: https://issues.apache.org/jira/browse/HBASE-23168 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 2.2.2 > > Attachments: HBASE-23168-branch-2.2.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23167) Set version as 2.2.2 in branch-2.2 in prep for first RC of 2.2.2
[ https://issues.apache.org/jira/browse/HBASE-23167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953308#comment-16953308 ] Hudson commented on HBASE-23167: Results for branch branch-2.2 [build #664 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/664/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/664//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/664//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/664//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Set version as 2.2.2 in branch-2.2 in prep for first RC of 2.2.2 > > > Key: HBASE-23167 > URL: https://issues.apache.org/jira/browse/HBASE-23167 > Project: HBase > Issue Type: Sub-task > Components: build >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 2.2.2 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-22370) ByteBuf LEAK ERROR
[ https://issues.apache.org/jira/browse/HBASE-22370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953307#comment-16953307 ] Hudson commented on HBASE-22370: Results for branch branch-2.2 [build #664 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/664/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/664//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/664//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/664//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > ByteBuf LEAK ERROR > -- > > Key: HBASE-22370 > URL: https://issues.apache.org/jira/browse/HBASE-22370 > Project: HBase > Issue Type: Bug > Components: rpc, wal >Affects Versions: 2.2.1 >Reporter: Lijin Bin >Assignee: Lijin Bin >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.2, 2.1.8 > > Attachments: HBASE-22370-master-v1.patch > > > We do failover test and throw a leak error, this is hard to reproduce. > {code} > 2019-05-06 02:30:27,781 ERROR [AsyncFSWAL-0] util.ResourceLeakDetector: LEAK: > ByteBuf.release() was not called before it's garbage-collected. See > http://netty.io/wiki/reference-counted-objects.html for more information. 
> Recent access records: > Created at: > > org.apache.hbase.thirdparty.io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:334) > > org.apache.hbase.thirdparty.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187) > > org.apache.hbase.thirdparty.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178) > > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush0(FanOutOneBlockAsyncDFSOutput.java:494) > > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush(FanOutOneBlockAsyncDFSOutput.java:513) > > org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.sync(AsyncProtobufLogWriter.java:144) > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.sync(AsyncFSWAL.java:353) > > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.consume(AsyncFSWAL.java:536) > > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > java.lang.Thread.run(Thread.java:748) > {code} > If FanOutOneBlockAsyncDFSOutput#endBlock throw Exception before call > "buf.release();", this buf has not chance to release. > In CallRunner if the call skipped or Dropping timed out call, the call do not > call cleanup. -- This message was sent by Atlassian Jira (v8.3.4#803005)
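The leak described above is a reference-counting bug: a buffer acquired in flush0 has no release() on some paths, e.g. when CallRunner skips a call or drops a timed-out one. The discipline the fix requires can be sketched with a minimal stand-in for ByteBuf (Netty itself not needed for the illustration):

```java
// Minimal sketch of reference-counting discipline: every acquired buffer
// must be release()d on all paths, including early-exit paths like the
// skipped/dropped calls the issue describes. The Buf class below is a toy
// stand-in for Netty's ByteBuf.
public class RefCountSketch {
  static class Buf {
    private int refCnt = 1;
    boolean release() { return --refCnt == 0; }
    int refCnt() { return refCnt; }
  }

  static void consume(Buf buf, boolean dropTimedOutCall) {
    try {
      if (dropTimedOutCall) {
        return; // early exit -- without the finally below, this path leaks
      }
      // ... normal processing of the call ...
    } finally {
      buf.release(); // runs on every path, so the buffer never leaks
    }
  }

  public static void main(String[] args) {
    Buf b = new Buf();
    consume(b, true); // even the dropped-call path releases the buffer
    System.out.println(b.refCnt()); // 0
  }
}
```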
[jira] [Commented] (HBASE-20626) Change the value of "Requests Per Second" on WEBUI
[ https://issues.apache.org/jira/browse/HBASE-20626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953309#comment-16953309 ] Hudson commented on HBASE-20626: Results for branch branch-2.2 [build #664 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/664/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/664//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/664//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/664//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Change the value of "Requests Per Second" on WEBUI > -- > > Key: HBASE-20626 > URL: https://issues.apache.org/jira/browse/HBASE-20626 > Project: HBase > Issue Type: Improvement > Components: metrics, UI >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.2, 2.1.8 > > Attachments: HBASE-20626.master.001.patch, > HBASE-20626.master.002.patch, HBASE-20626.master.003.patch > > > Now we use "totalRequestCount"(RSRpcServices#requestCount) to calculate > requests per second. > After HBASE-18469, "totalRequestCount" count only once for multi > request.(Includes requests that are not serviced by regions.) > When we have a large number of read and write requests, the value of > "Requests Per Second" is very small which does not reflect the load of the > cluster. > Maybe it is more reasonable to use "totalRowActionRequestCount" to calculate > RPS? 
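The metric change proposed in HBASE-20626 is just a question of which monotonic counter feeds the per-second rate. A toy computation showing why the counter choice matters (the numbers are illustrative, not taken from a real cluster):

```java
// Requests-per-second is the delta of a monotonically increasing counter
// over the sampling interval. After HBASE-18469, totalRequestCount ticks
// once per multi() call, so a single multi carrying many row actions barely
// moves it; a row-action counter reflects the actual load.
public class RpsSketch {
  static double rps(long prevCount, long currCount, long intervalSeconds) {
    return (currCount - prevCount) / (double) intervalSeconds;
  }

  public static void main(String[] args) {
    // One multi() carrying 1000 row actions, sampled over a 10s window:
    System.out.println(rps(0, 1, 10));    // totalRequestCount view: 0.1
    System.out.println(rps(0, 1000, 10)); // row-action counter view: 100.0
  }
}
```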
[GitHub] [hbase] Apache-HBase removed a comment on issue #623: HBASE-22749: Distributed MOB compactions
Apache-HBase removed a comment on issue #623: HBASE-22749: Distributed MOB compactions URL: https://github.com/apache/hbase/pull/623#issuecomment-541875003 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | :blue_heart: | reexec | 0m 0s | Docker mode activated. | | :broken_heart: | patch | 0m 6s | https://github.com/apache/hbase/pull/623 does not apply to master. Rebase required? Wrong Branch? See https://yetus.apache.org/documentation/in-progress/precommit-patchnames for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hbase/pull/623 | | JIRA Issue | HBASE-22749 | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-623/2/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Reopened] (HBASE-23177) If fail to open reference because FNFE, make it plain it is a Reference
[ https://issues.apache.org/jira/browse/HBASE-23177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack reopened HBASE-23177: --- Reopen for branch-1 backport. > If fail to open reference because FNFE, make it plain it is a Reference > --- > > Key: HBASE-23177 > URL: https://issues.apache.org/jira/browse/HBASE-23177 > Project: HBase > Issue Type: Bug > Components: Operability >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3 > > Attachments: > 0001-HBASE-23177-If-fail-to-open-reference-because-FNFE-m.patch > > > If root file for a Reference is missing, takes a while to figure it. > Master-side says failed open of Region. RegionServer side it talks about FNFE > for some random file. Better, dump out Reference data. Helps figuring what > has gone wrong. Otherwise its confusing hard to tie the FNFE to root cause. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions
VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions URL: https://github.com/apache/hbase/pull/623#discussion_r335753662 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobUtils.java ## @@ -370,6 +320,23 @@ public static Path getMobHome(Configuration conf) { return getMobHome(hbaseDir); } + /** + * Gets region encoded name from a MOB file name + * @param mobFileName name of a MOB file + * @return encoded region name + */ + public static String getEncodedRegionNameFromMobFileName(String mobFileName) Review comment: The method is not needed anymore. I removed it. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions
VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions URL: https://github.com/apache/hbase/pull/623#discussion_r335753132 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobUtils.java ## @@ -513,6 +471,40 @@ public static void removeMobFiles(Configuration conf, FileSystem fs, TableName t storeFiles); } + /** + * Archives the mob files. + * @param conf The current configuration. + * @param tableName The table name. + * @param family The name of the column family. + * @param storeFiles The files to be archived. + * @throws IOException + */ + public static void removeMobFiles(Configuration conf, TableName tableName, +byte[] family, List<Path> storeFiles) throws IOException { + +if (storeFiles.size() == 0) { + // nothing to remove + LOG.debug("Skipping archiving old MOB file: collection is empty"); + return; +} +Path mobTableDir = FSUtils.getTableDir(MobUtils.getMobHome(conf), tableName); +FileSystem fs = storeFiles.get(0).getFileSystem(conf); +Path storeArchiveDir = HFileArchiveUtil.getStoreArchivePath(conf, getMobRegionInfo(tableName), + mobTableDir, family); + +for (Path p: storeFiles) { + Path archiveFilePath = new Path(storeArchiveDir, p.getName()); + if (fs.exists(archiveFilePath)) { +LOG.info(" MOB Cleaner skip archiving: " + p); Review comment: Fixed. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions
VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions URL: https://github.com/apache/hbase/pull/623#discussion_r335752452 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobUtils.java ## @@ -561,91 +553,43 @@ public static Cell createMobRefCell(Cell cell, byte[] fileName, byte[] refCellTa public static StoreFileWriter createWriter(Configuration conf, FileSystem fs, ColumnFamilyDescriptor family, String date, Path basePath, long maxKeyCount, Compression.Algorithm compression, String startKey, CacheConfig cacheConfig, - Encryption.Context cryptoContext, boolean isCompaction) + Encryption.Context cryptoContext, boolean isCompaction, String regionName) throws IOException { MobFileName mobFileName = MobFileName.create(startKey, date, -UUID.randomUUID().toString().replaceAll("-", "")); +UUID.randomUUID().toString().replaceAll("-", ""), regionName); return createWriter(conf, fs, family, mobFileName, basePath, maxKeyCount, compression, cacheConfig, cryptoContext, isCompaction); } - /** - * Creates a writer for the ref file in temp directory. - * @param conf The current configuration. - * @param fs The current file system. - * @param family The descriptor of the current column family. - * @param basePath The basic path for a temp directory. - * @param maxKeyCount The key count. - * @param cacheConfig The current cache config. - * @param cryptoContext The encryption context. - * @param isCompaction If the writer is used in compaction. - * @return The writer for the mob file. 
- * @throws IOException - */ - public static StoreFileWriter createRefFileWriter(Configuration conf, FileSystem fs, -ColumnFamilyDescriptor family, Path basePath, long maxKeyCount, CacheConfig cacheConfig, -Encryption.Context cryptoContext, boolean isCompaction) -throws IOException { -return createWriter(conf, fs, family, - new Path(basePath, UUID.randomUUID().toString().replaceAll("-", "")), maxKeyCount, - family.getCompactionCompressionType(), cacheConfig, cryptoContext, - HStore.getChecksumType(conf), HStore.getBytesPerChecksum(conf), family.getBlocksize(), - family.getBloomFilterType(), isCompaction); - } - /** - * Creates a writer for the mob file in temp directory. - * @param conf The current configuration. - * @param fs The current file system. - * @param family The descriptor of the current column family. - * @param date The date string, its format is mmmdd. - * @param basePath The basic path for a temp directory. - * @param maxKeyCount The key count. - * @param compression The compression algorithm. - * @param startKey The start key. - * @param cacheConfig The current cache config. - * @param cryptoContext The encryption context. - * @param isCompaction If the writer is used in compaction. - * @return The writer for the mob file. - * @throws IOException - */ - public static StoreFileWriter createWriter(Configuration conf, FileSystem fs, - ColumnFamilyDescriptor family, String date, Path basePath, long maxKeyCount, - Compression.Algorithm compression, byte[] startKey, CacheConfig cacheConfig, - Encryption.Context cryptoContext, boolean isCompaction) - throws IOException { -MobFileName mobFileName = MobFileName.create(startKey, date, -UUID.randomUUID().toString().replaceAll("-", "")); -return createWriter(conf, fs, family, mobFileName, basePath, maxKeyCount, compression, - cacheConfig, cryptoContext, isCompaction); - } +// /** +// * Creates a writer for the mob file in temp directory. +// * @param conf The current configuration. 
+// * @param fs The current file system. +// * @param family The descriptor of the current column family. +// * @param date The date string, its format is mmmdd. +// * @param basePath The basic path for a temp directory. +// * @param maxKeyCount The key count. +// * @param compression The compression algorithm. +// * @param startKey The start key. +// * @param cacheConfig The current cache config. +// * @param cryptoContext The encryption context. +// * @param isCompaction If the writer is used in compaction. +// * @return The writer for the mob file. +// * @throws IOException +// */ +// public static StoreFileWriter createWriter(Configuration conf, FileSystem fs, Review comment: Fixed. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
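The diff above threads a new regionName component into the MOB file name while keeping the existing convention of a dash-free random UUID. As a hedged, hypothetical sketch (class and method names here are illustrative, not HBase's actual MobFileName API), the naming logic amounts to:

```java
import java.util.UUID;

// Hypothetical sketch of the patched naming scheme: start key, date,
// a 32-character dash-free UUID, and now the region name.
public class MobFileNameSketch {

    // Mirrors the createWriter change: strip the four dashes from a
    // random UUID, then append the region name to the file-name parts.
    public static String build(String startKey, String date, String regionName) {
        String uuid = UUID.randomUUID().toString().replaceAll("-", "");
        return startKey + date + uuid + regionName;
    }

    public static void main(String[] args) {
        System.out.println(build("a1b2", "20191017", "region-0"));
    }
}
```

Stripping the four dashes from a UUID leaves exactly 32 hex characters, so each name component keeps a predictable width.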
[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions
VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions URL: https://github.com/apache/hbase/pull/623#discussion_r335752296 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobUtils.java ## @@ -513,6 +471,40 @@ public static void removeMobFiles(Configuration conf, FileSystem fs, TableName t storeFiles); } + /** + * Archives the mob files. + * @param conf The current configuration. + * @param tableName The table name. + * @param family The name of the column family. + * @param storeFiles The files to be archived. + * @throws IOException + */ + public static void removeMobFiles(Configuration conf, TableName tableName, +byte[] family, List storeFiles) throws IOException { + +if (storeFiles.size() == 0) { + // nothing to remove + LOG.debug("Skipping archiving old MOB file: collection is empty"); + return; +} +Path mobTableDir = FSUtils.getTableDir(MobUtils.getMobHome(conf), tableName); +FileSystem fs = storeFiles.get(0).getFileSystem(conf); +Path storeArchiveDir = HFileArchiveUtil.getStoreArchivePath(conf, getMobRegionInfo(tableName), + mobTableDir, family); + +for (Path p: storeFiles) { + Path archiveFilePath = new Path(storeArchiveDir, p.getName()); + if (fs.exists(archiveFilePath)) { +LOG.info(" MOB Cleaner skip archiving: " + p); Review comment: Fixed This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
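The removeMobFiles hunk above has two guard paths: an early return on an empty collection and a per-file skip when the archive already contains the file. Below is a minimal pure-logic model of that selection, with hypothetical names; the real method additionally resolves archive paths via HFileArchiveUtil and moves files on the FileSystem:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Set;

// Pure-logic model of the cleaner's skip rules: which store-file names
// still need archiving, given what already exists in the archive dir.
public class MobArchiveSketch {

    public static List<String> toArchive(List<String> storeFiles, Set<String> alreadyArchived) {
        if (storeFiles.isEmpty()) {
            // mirrors the early return: nothing to remove
            return Collections.emptyList();
        }
        List<String> pending = new ArrayList<>();
        for (String name : storeFiles) {
            if (alreadyArchived.contains(name)) {
                continue; // already archived, the cleaner skips it
            }
            pending.add(name);
        }
        return pending;
    }
}
```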
[jira] [Commented] (HBASE-23177) If fail to open reference because FNFE, make it plain it is a Reference
[ https://issues.apache.org/jira/browse/HBASE-23177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953267#comment-16953267 ] Sean Busbey commented on HBASE-23177: - can we get some branch-1 love on this? if not, I'll circle back. > If fail to open reference because FNFE, make it plain it is a Reference > --- > > Key: HBASE-23177 > URL: https://issues.apache.org/jira/browse/HBASE-23177 > Project: HBase > Issue Type: Bug > Components: Operability >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3 > > Attachments: > 0001-HBASE-23177-If-fail-to-open-reference-because-FNFE-m.patch > > > If root file for a Reference is missing, takes a while to figure it. > Master-side says failed open of Region. RegionServer side it talks about FNFE > for some random file. Better, dump out Reference data. Helps figuring what > has gone wrong. Otherwise its confusing hard to tie the FNFE to root cause. -- This message was sent by Atlassian Jira (v8.3.4#803005)
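The fix HBASE-23177 describes is essentially message enrichment: when opening the file a Reference points to throws FileNotFoundException, rethrow with the Reference details in the message so the log ties the FNFE to its root cause. A hedged sketch under assumed names (not the committed HBase code):

```java
import java.io.FileNotFoundException;

// Illustrative rewrapping of an FNFE so the Reference and the missing
// referred-to file both appear in one message, keeping the original as cause.
public class ReferenceOpenSketch {

    public static FileNotFoundException describe(FileNotFoundException fnfe,
            String referencePath, String referredFile) {
        FileNotFoundException enriched = new FileNotFoundException(
            "Failed open of Reference " + referencePath
                + "; referred-to file is missing: " + referredFile);
        enriched.initCause(fnfe);
        return enriched;
    }
}
```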
[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions
VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions URL: https://github.com/apache/hbase/pull/623#discussion_r335748358 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFileName.java ## @@ -148,6 +156,13 @@ public String getStartKey() { return startKey; } + /** + * Gets region name Review comment: Fixed.
[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions
VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions URL: https://github.com/apache/hbase/pull/623#discussion_r335747278 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobFileCleanerTool.java ## @@ -43,9 +43,9 @@ * The cleaner to delete the expired MOB files. */ @InterfaceAudience.Private -public class ExpiredMobFileCleaner extends Configured implements Tool { +public class MobFileCleanerTool extends Configured implements Tool { Review comment: Makes sense.
[jira] [Commented] (HBASE-23065) [hbtop] Top-N heavy hitter user and client drill downs
[ https://issues.apache.org/jira/browse/HBASE-23065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16953202#comment-16953202 ] Ankit Singhal commented on HBASE-23065: --- [~apurtell]/[~brfrn169]/[~busbey]/[~elserj], can anybody please help me with the review? > [hbtop] Top-N heavy hitter user and client drill downs > -- > > Key: HBASE-23065 > URL: https://issues.apache.org/jira/browse/HBASE-23065 > Project: HBase > Issue Type: Improvement > Components: hbtop, Operability >Reporter: Andrew Kyle Purtell >Assignee: Ankit Singhal >Priority: Major > > After HBASE-15519, or after an additional change on top of it that provides > necessary data in ClusterStatus, add drill down top-N views of activity > aggregated per user or per client IP. Only a relatively small N of the heavy > hitters need be tracked assuming this will be most useful when one or a > handful of users or clients is generating problematic load and hbtop is > invoked to learn their identity. > This is a critical missing piece. After drilling down to find hot regions or > tables, sometimes that is not enough, we also need to know which application > or subset of clients out of many may be the source of the hot spotting load. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-23177) If fail to open reference because FNFE, make it plain it is a Reference
[ https://issues.apache.org/jira/browse/HBASE-23177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack resolved HBASE-23177. --- Fix Version/s: 2.2.3 2.1.8 2.3.0 3.0.0 Hadoop Flags: Reviewed Release Note: Changes the message on the FNFE exception thrown when the file a Reference points to is missing; the message now includes detail on Reference as well as pointed-to file so can connect how FNFE relates to region open. Assignee: Michael Stack Resolution: Fixed > If fail to open reference because FNFE, make it plain it is a Reference > --- > > Key: HBASE-23177 > URL: https://issues.apache.org/jira/browse/HBASE-23177 > Project: HBase > Issue Type: Bug > Components: Operability >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3 > > Attachments: > 0001-HBASE-23177-If-fail-to-open-reference-because-FNFE-m.patch > > > If root file for a Reference is missing, takes a while to figure it. > Master-side says failed open of Region. RegionServer side it talks about FNFE > for some random file. Better, dump out Reference data. Helps figuring what > has gone wrong. Otherwise its confusing hard to tie the FNFE to root cause. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23177) If fail to open reference because FNFE, make it plain it is a Reference
[ https://issues.apache.org/jira/browse/HBASE-23177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Stack updated HBASE-23177: -- Description: If root file for a Reference is missing, takes a while to figure it. Master-side says failed open of Region. RegionServer side it talks about FNFE for some random file. Better, dump out Reference data. Helps figuring what has gone wrong. Otherwise its confusing hard to tie the FNFE to root cause. (was: If root file for a Reference is missing, takes a while to figure it. Master-side says failed open of Region. RegionServer side it talks about FNFE for some random file. Better, dump out Reference data.) > If fail to open reference because FNFE, make it plain it is a Reference > --- > > Key: HBASE-23177 > URL: https://issues.apache.org/jira/browse/HBASE-23177 > Project: HBase > Issue Type: Bug > Components: Operability >Reporter: Michael Stack >Priority: Major > Attachments: > 0001-HBASE-23177-If-fail-to-open-reference-because-FNFE-m.patch > > > If root file for a Reference is missing, takes a while to figure it. > Master-side says failed open of Region. RegionServer side it talks about FNFE > for some random file. Better, dump out Reference data. Helps figuring what > has gone wrong. Otherwise its confusing hard to tie the FNFE to root cause. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] saintstack closed pull request #726: HBASE-23177 If fail to open reference because FNFE, make it plain it …
saintstack closed pull request #726: HBASE-23177 If fail to open reference because FNFE, make it plain it … URL: https://github.com/apache/hbase/pull/726
[GitHub] [hbase] saintstack commented on issue #726: HBASE-23177 If fail to open reference because FNFE, make it plain it …
saintstack commented on issue #726: HBASE-23177 If fail to open reference because FNFE, make it plain it … URL: https://github.com/apache/hbase/pull/726#issuecomment-542857303 Merged it offline so I could add in handling of @Apache9's suggestion.
[GitHub] [hbase] karthikhw commented on issue #725: HBASE-23176 delete_all_snapshot does not work with regex
karthikhw commented on issue #725: HBASE-23176 delete_all_snapshot does not work with regex URL: https://github.com/apache/hbase/pull/725#issuecomment-542828098 Thank you very much @guangxuCheng for checking this. There are 3 commits here; the first commit contains the changes for delete_all_snapshot.rb.
[GitHub] [hbase] Apache-HBase commented on issue #677: HBASE-23073 Add an optional costFunction to balance regions according to a capacity rule
Apache-HBase commented on issue #677: HBASE-23073 Add an optional costFunction to balance regions according to a capacity rule URL: https://github.com/apache/hbase/pull/677#issuecomment-542800641 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | :blue_heart: | reexec | 1m 31s | Docker mode activated. | ||| _ Prechecks _ | | :green_heart: | dupname | 0m 0s | No case conflicting files found. | | :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ master Compile Tests _ | | :green_heart: | mvninstall | 5m 55s | master passed | | :green_heart: | compile | 0m 57s | master passed | | :green_heart: | checkstyle | 1m 30s | master passed | | :green_heart: | shadedjars | 4m 58s | branch has no errors when building our shaded downstream artifacts. | | :green_heart: | javadoc | 0m 36s | master passed | | :blue_heart: | spotbugs | 4m 31s | Used deprecated FindBugs config; considering switching to SpotBugs. | | :green_heart: | findbugs | 4m 29s | master passed | ||| _ Patch Compile Tests _ | | :green_heart: | mvninstall | 5m 26s | the patch passed | | :green_heart: | compile | 0m 56s | the patch passed | | :green_heart: | javac | 0m 56s | the patch passed | | :green_heart: | checkstyle | 1m 29s | the patch passed | | :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | :green_heart: | shadedjars | 4m 55s | patch has no errors when building our shaded downstream artifacts. | | :green_heart: | hadoopcheck | 17m 9s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. | | :green_heart: | javadoc | 0m 35s | the patch passed | | :green_heart: | findbugs | 4m 16s | the patch passed | ||| _ Other Tests _ | | :green_heart: | unit | 156m 40s | hbase-server in the patch passed. 
| | :green_heart: | asflicense | 0m 30s | The patch does not generate ASF License warnings. | | | | 218m 34s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-677/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/677 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 1f969b0b19c3 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-677/out/precommit/personality/provided.sh | | git revision | master / 395cfceb0b | | Default Java | 1.8.0_181 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-677/6/testReport/ | | Max. process+thread count | 4265 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-677/6/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 | | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HBASE-23168) Generate CHANGES.md and RELEASENOTES.md for 2.2.2
[ https://issues.apache.org/jira/browse/HBASE-23168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16952906#comment-16952906 ] HBase QA commented on HBASE-23168: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue} 0m 0s{color} | {color:blue} markdownlint was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} branch-2.2 Compile Tests {color} || || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 1m 23s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/PreCommit-HBASE-Build/959/artifact/patchprocess/Dockerfile | | JIRA Issue | HBASE-23168 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12983182/HBASE-23168-branch-2.2.patch | | Optional Tests | dupname asflicense markdownlint | | uname | Linux 246ba1e63b6e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/hbase-personality.sh | | git revision | branch-2.2 / f7474aeab5 | | Max. process+thread count | 46 (vs. ulimit of 1) | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/959/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) | | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org | This message was automatically generated. > Generate CHANGES.md and RELEASENOTES.md for 2.2.2 > - > > Key: HBASE-23168 > URL: https://issues.apache.org/jira/browse/HBASE-23168 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 2.2.2 > > Attachments: HBASE-23168-branch-2.2.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-23179) Put up 2.2.2RC0
[ https://issues.apache.org/jira/browse/HBASE-23179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang reassigned HBASE-23179: - Assignee: Duo Zhang > Put up 2.2.2RC0 > --- > > Key: HBASE-23179 > URL: https://issues.apache.org/jira/browse/HBASE-23179 > Project: HBase > Issue Type: Sub-task >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major >
[jira] [Created] (HBASE-23179) Put up 2.2.2RC0
Duo Zhang created HBASE-23179: - Summary: Put up 2.2.2RC0 Key: HBASE-23179 URL: https://issues.apache.org/jira/browse/HBASE-23179 Project: HBase Issue Type: Sub-task Reporter: Duo Zhang
[jira] [Updated] (HBASE-23168) Generate CHANGES.md and RELEASENOTES.md for 2.2.2
[ https://issues.apache.org/jira/browse/HBASE-23168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-23168: -- Attachment: HBASE-23168-branch-2.2.patch > Generate CHANGES.md and RELEASENOTES.md for 2.2.2 > - > > Key: HBASE-23168 > URL: https://issues.apache.org/jira/browse/HBASE-23168 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 2.2.2 > > Attachments: HBASE-23168-branch-2.2.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23168) Generate CHANGES.md and RELEASENOTES.md for 2.2.2
[ https://issues.apache.org/jira/browse/HBASE-23168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-23168: -- Status: Patch Available (was: In Progress) > Generate CHANGES.md and RELEASENOTES.md for 2.2.2 > - > > Key: HBASE-23168 > URL: https://issues.apache.org/jira/browse/HBASE-23168 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 2.2.2 > > Attachments: HBASE-23168-branch-2.2.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-20626) Change the value of "Requests Per Second" on WEBUI
[ https://issues.apache.org/jira/browse/HBASE-20626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-20626: -- Release Note: Use 'totalRowActionRequestCount' to calculate QPS on web UI. > Change the value of "Requests Per Second" on WEBUI > -- > > Key: HBASE-20626 > URL: https://issues.apache.org/jira/browse/HBASE-20626 > Project: HBase > Issue Type: Improvement > Components: metrics, UI >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.2, 2.1.8 > > Attachments: HBASE-20626.master.001.patch, > HBASE-20626.master.002.patch, HBASE-20626.master.003.patch > > > Now we use "totalRequestCount"(RSRpcServices#requestCount) to calculate > requests per second. > After HBASE-18469, "totalRequestCount" count only once for multi > request.(Includes requests that are not serviced by regions.) > When we have a large number of read and write requests, the value of > "Requests Per Second" is very small which does not reflect the load of the > cluster. > Maybe it is more reasonable to use "totalRowActionRequestCount" to calculate > RPS? -- This message was sent by Atlassian Jira (v8.3.4#803005)
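The release note above says the web UI now derives "Requests Per Second" from totalRowActionRequestCount instead of totalRequestCount, since the latter counts a multi request only once. The computation itself is just a counter delta over elapsed time; a sketch with assumed names, not the actual UI code:

```java
// Hypothetical QPS calculation: difference of a monotonically increasing
// row-action counter between two UI refreshes, divided by elapsed seconds.
public class QpsSketch {

    public static double requestsPerSecond(long prevRowActionCount,
            long currRowActionCount, long elapsedSeconds) {
        if (elapsedSeconds <= 0) {
            return 0.0; // guard against a zero-length sampling window
        }
        return (currRowActionCount - prevRowActionCount) / (double) elapsedSeconds;
    }
}
```

With a per-row counter, a single multi request carrying 500 row actions over a 10-second window reads as 50 QPS rather than 0.1.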
[GitHub] [hbase] busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions
busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions URL: https://github.com/apache/hbase/pull/623#discussion_r335519658 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobConstants.java ## @@ -55,33 +55,23 @@ public static final long DEFAULT_MOB_CACHE_EVICT_PERIOD = 3600L; public final static String TEMP_DIR_NAME = ".tmp"; - public final static String BULKLOAD_DIR_NAME = ".bulkload"; public final static byte[] MOB_TABLE_LOCK_SUFFIX = Bytes.toBytes(".mobLock"); - public final static String EMPTY_STRING = ""; Review comment: We're not supposed to make binary incompatible changes to IA.Public classes except after a deprecation cycle. for the master branch we'll either need to mark it deprecated with expected removal in 4.0, or we'll need to call it out in the release note as removed before then and why that was necessary. I'd just deprecate it. in any case we'll need to make sure it doesn't get removed in any backports. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
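The compatibility process busbey describes (no binary-incompatible removal from an IA.Public class without a deprecation cycle) looks roughly like this in practice; the class name is illustrative and the removal version is the hypothetical next major release:

```java
// Illustrative deprecation cycle for a public constant slated for removal:
// keep the member binary-compatible, mark it deprecated, and document when
// it goes away, instead of deleting it outright.
public final class MobConstantsSketch {

    /**
     * @deprecated Unused since the distributed MOB compaction rework;
     *             expected to be removed in the next major release (4.0.0).
     */
    @Deprecated
    public static final String EMPTY_STRING = "";

    private MobConstantsSketch() {
        // constants holder, never instantiated
    }
}
```

Downstream code compiled against the old jar keeps linking, and the javadoc tag gives users a release's worth of warning before removal.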
[GitHub] [hbase] busbey commented on issue #623: HBASE-22749: Distributed MOB compactions
busbey commented on issue #623: HBASE-22749: Distributed MOB compactions URL: https://github.com/apache/hbase/pull/623#issuecomment-542731467 Please push the current state of the work to the PR branch; it's much harder to follow the conversation and track the code off of an outdated commit.
[jira] [Commented] (HBASE-20626) Change the value of "Requests Per Second" on WEBUI
[ https://issues.apache.org/jira/browse/HBASE-20626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16952879#comment-16952879 ] Hudson commented on HBASE-20626: Results for branch master [build #1507 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1507/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/master/1507//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1507//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/master/1507//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Change the value of "Requests Per Second" on WEBUI > -- > > Key: HBASE-20626 > URL: https://issues.apache.org/jira/browse/HBASE-20626 > Project: HBase > Issue Type: Improvement > Components: metrics, UI >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.2, 2.1.8 > > Attachments: HBASE-20626.master.001.patch, > HBASE-20626.master.002.patch, HBASE-20626.master.003.patch > > > Now we use "totalRequestCount"(RSRpcServices#requestCount) to calculate > requests per second. > After HBASE-18469, "totalRequestCount" count only once for multi > request.(Includes requests that are not serviced by regions.) > When we have a large number of read and write requests, the value of > "Requests Per Second" is very small which does not reflect the load of the > cluster. > Maybe it is more reasonable to use "totalRowActionRequestCount" to calculate > RPS? 
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HBASE-23168) Generate CHANGES.md and RELEASENOTES.md for 2.2.2
[ https://issues.apache.org/jira/browse/HBASE-23168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang reassigned HBASE-23168: - Assignee: Duo Zhang > Generate CHANGES.md and RELEASENOTES.md for 2.2.2 > - > > Key: HBASE-23168 > URL: https://issues.apache.org/jira/browse/HBASE-23168 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 2.2.2 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work started] (HBASE-23168) Generate CHANGES.md and RELEASENOTES.md for 2.2.2
[ https://issues.apache.org/jira/browse/HBASE-23168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-23168 started by Duo Zhang. - > Generate CHANGES.md and RELEASENOTES.md for 2.2.2 > - > > Key: HBASE-23168 > URL: https://issues.apache.org/jira/browse/HBASE-23168 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 2.2.2 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Reopened] (HBASE-23168) Generate CHANGES.md and RELEASENOTES.md for 2.2.2
[ https://issues.apache.org/jira/browse/HBASE-23168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang reopened HBASE-23168: --- > Generate CHANGES.md and RELEASENOTES.md for 2.2.2 > - > > Key: HBASE-23168 > URL: https://issues.apache.org/jira/browse/HBASE-23168 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 2.2.2 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-23168) Generate CHANGES.md and RELEASENOTES.md for 2.2.2
[ https://issues.apache.org/jira/browse/HBASE-23168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-23168. --- Resolution: Fixed For generating CHANGES.md > Generate CHANGES.md and RELEASENOTES.md for 2.2.2 > - > > Key: HBASE-23168 > URL: https://issues.apache.org/jira/browse/HBASE-23168 > Project: HBase > Issue Type: Sub-task > Components: documentation >Reporter: Duo Zhang >Priority: Major > Fix For: 2.2.2 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-20626) Change the value of "Requests Per Second" on WEBUI
[ https://issues.apache.org/jira/browse/HBASE-20626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-20626: -- Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to branch-2.2 and branch-2.1. Resolve now for releasing 2.2.2. [~apurtell] Feel free to reopen it for applying to 1.x. Thanks [~gxcheng]. > Change the value of "Requests Per Second" on WEBUI > -- > > Key: HBASE-20626 > URL: https://issues.apache.org/jira/browse/HBASE-20626 > Project: HBase > Issue Type: Improvement > Components: metrics, UI >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.2, 2.1.8 > > Attachments: HBASE-20626.master.001.patch, > HBASE-20626.master.002.patch, HBASE-20626.master.003.patch > > > Now we use "totalRequestCount"(RSRpcServices#requestCount) to calculate > requests per second. > After HBASE-18469, "totalRequestCount" count only once for multi > request.(Includes requests that are not serviced by regions.) > When we have a large number of read and write requests, the value of > "Requests Per Second" is very small which does not reflect the load of the > cluster. > Maybe it is more reasonable to use "totalRowActionRequestCount" to calculate > RPS? -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HBASE-23167) Set version as 2.2.2 in branch-2.2 in prep for first RC of 2.2.2
[ https://issues.apache.org/jira/browse/HBASE-23167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang resolved HBASE-23167. --- Hadoop Flags: Reviewed Resolution: Fixed Pushed to branch-2.2. Thanks [~zghao] for reviewing. > Set version as 2.2.2 in branch-2.2 in prep for first RC of 2.2.2 > > > Key: HBASE-23167 > URL: https://issues.apache.org/jira/browse/HBASE-23167 > Project: HBase > Issue Type: Sub-task > Components: build >Reporter: Duo Zhang >Assignee: Duo Zhang >Priority: Major > Fix For: 2.2.2 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache9 merged pull request #727: HBASE-23167 Set version as 2.2.2 in branch-2.2 in prep for first RC o…
Apache9 merged pull request #727: HBASE-23167 Set version as 2.2.2 in branch-2.2 in prep for first RC o… URL: https://github.com/apache/hbase/pull/727
[GitHub] [hbase] PierreZ commented on issue #677: HBASE-23073 Add an optional costFunction to balance regions according to a capacity rule
PierreZ commented on issue #677: HBASE-23073 Add an optional costFunction to balance regions according to a capacity rule URL: https://github.com/apache/hbase/pull/677#issuecomment-542698253 I added a test to load the rule files from HDFS.
[jira] [Updated] (HBASE-22370) ByteBuf LEAK ERROR
[ https://issues.apache.org/jira/browse/HBASE-22370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-22370: -- Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to branch-2.1+. Thanks [~binlijin] for contributing. > ByteBuf LEAK ERROR > -- > > Key: HBASE-22370 > URL: https://issues.apache.org/jira/browse/HBASE-22370 > Project: HBase > Issue Type: Bug > Components: rpc, wal >Affects Versions: 2.2.1 >Reporter: Lijin Bin >Assignee: Lijin Bin >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.2, 2.1.8 > > Attachments: HBASE-22370-master-v1.patch > > > We ran a failover test and it threw a leak error; this is hard to reproduce. > {code} > 2019-05-06 02:30:27,781 ERROR [AsyncFSWAL-0] util.ResourceLeakDetector: LEAK: > ByteBuf.release() was not called before it's garbage-collected. See > http://netty.io/wiki/reference-counted-objects.html for more information. > Recent access records: > Created at: > > org.apache.hbase.thirdparty.io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:334) > > org.apache.hbase.thirdparty.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187) > > org.apache.hbase.thirdparty.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178) > > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush0(FanOutOneBlockAsyncDFSOutput.java:494) > > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush(FanOutOneBlockAsyncDFSOutput.java:513) > > org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.sync(AsyncProtobufLogWriter.java:144) > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.sync(AsyncFSWAL.java:353) > > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.consume(AsyncFSWAL.java:536) > > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > java.lang.Thread.run(Thread.java:748) > {code} > If FanOutOneBlockAsyncDFSOutput#endBlock throws an exception before calling > "buf.release();", the buffer never gets a chance to be released. > In CallRunner, if the call is skipped or a timed-out call is dropped, cleanup > is never called on the call. -- This message was sent by Atlassian Jira (v8.3.4#803005)
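The leak pattern described in this issue can be illustrated with a minimal, self-contained sketch. The `Buf` class below is a hypothetical stand-in for Netty's reference-counted `ByteBuf` (not the real API): if `endBlock` throws before `release()` runs, the reference count never drops back to zero, which is exactly what `ResourceLeakDetector` later reports. The usual fix is to move the release into a `finally` block.

```java
// Minimal sketch of the leak and its fix. `Buf` is a hypothetical stand-in
// for a reference-counted buffer (NOT Netty's ByteBuf API).
import java.util.concurrent.atomic.AtomicInteger;

public class ReleaseOnFailure {
    static class Buf {
        final AtomicInteger refCnt = new AtomicInteger(1);
        void release() { refCnt.decrementAndGet(); }
    }

    // Simulates FanOutOneBlockAsyncDFSOutput#endBlock failing mid-flush.
    static void endBlock(Buf buf) {
        throw new RuntimeException("simulated endBlock failure");
    }

    // Leaky shape: release() is only reached when endBlock() succeeds.
    static Buf flushLeaky() {
        Buf buf = new Buf();
        try {
            endBlock(buf);
            buf.release();
        } catch (RuntimeException ignored) {
            // exception swallowed; buf.release() was never called
        }
        return buf;
    }

    // Fixed shape: finally guarantees the release on every path.
    static Buf flushSafe() {
        Buf buf = new Buf();
        try {
            endBlock(buf);
        } catch (RuntimeException ignored) {
            // handle/log the failure
        } finally {
            buf.release();
        }
        return buf;
    }

    public static void main(String[] args) {
        System.out.println("leaky refCnt = " + flushLeaky().refCnt.get()); // stays 1: leaked
        System.out.println("safe refCnt  = " + flushSafe().refCnt.get());  // back to 0
    }
}
```

The same reasoning applies to the CallRunner half of the report: a dropped or skipped call needs its cleanup invoked on every exit path, not only on the success path.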
[GitHub] [hbase] Apache-HBase commented on issue #727: HBASE-23167 Set version as 2.2.2 in branch-2.2 in prep for first RC o…
Apache-HBase commented on issue #727: HBASE-23167 Set version as 2.2.2 in branch-2.2 in prep for first RC o… URL: https://github.com/apache/hbase/pull/727#issuecomment-542681098 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | :blue_heart: | reexec | 0m 40s | Docker mode activated. | ||| _ Prechecks _ | | :green_heart: | dupname | 0m 1s | No case conflicting files found. | | :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | :yellow_heart: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ branch-2.2 Compile Tests _ | | :blue_heart: | mvndep | 0m 26s | Maven dependency ordering for branch | | :green_heart: | mvninstall | 5m 18s | branch-2.2 passed | | :green_heart: | compile | 2m 51s | branch-2.2 passed | | :green_heart: | checkstyle | 2m 30s | branch-2.2 passed | | :green_heart: | shadedjars | 4m 3s | branch has no errors when building our shaded downstream artifacts. | | :green_heart: | javadoc | 12m 55s | branch-2.2 passed | ||| _ Patch Compile Tests _ | | :green_heart: | mvninstall | 4m 43s | branch-2.2 passed | | :blue_heart: | mvndep | 5m 8s | Maven dependency ordering for patch | | :green_heart: | mvninstall | 4m 47s | the patch passed | | :green_heart: | compile | 2m 53s | the patch passed | | :green_heart: | javac | 2m 53s | the patch passed | | :green_heart: | checkstyle | 2m 21s | the patch passed | | :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | :green_heart: | xml | 0m 56s | The patch has no ill-formed XML file. | | :green_heart: | shadedjars | 4m 0s | patch has no errors when building our shaded downstream artifacts. | | :green_heart: | hadoopcheck | 14m 52s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. 
| | :green_heart: | javadoc | 13m 30s | the patch passed | ||| _ Other Tests _ | | :green_heart: | unit | 219m 44s | root in the patch passed. | | :green_heart: | asflicense | 24m 24s | The patch does not generate ASF License warnings. | | | | 332m 56s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-727/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/727 | | Optional Tests | dupname asflicense javac javadoc unit shadedjars hadoopcheck xml compile checkstyle | | uname | Linux cbd19b8811a8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-727/out/precommit/personality/provided.sh | | git revision | branch-2.2 / 9b0980fbfb | | Default Java | 1.8.0_181 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-727/1/testReport/ | | Max. process+thread count | 5496 (vs. 
ulimit of 1) | | modules | C: hbase-checkstyle hbase-annotations hbase-build-configuration hbase-protocol-shaded hbase-common hbase-metrics-api hbase-hadoop-compat hbase-metrics hbase-hadoop2-compat hbase-protocol hbase-client hbase-zookeeper hbase-replication hbase-resource-bundle hbase-http hbase-procedure hbase-server hbase-mapreduce hbase-testing-util hbase-thrift hbase-rsgroup hbase-shell hbase-endpoint hbase-it hbase-rest hbase-examples hbase-shaded hbase-shaded/hbase-shaded-client hbase-shaded/hbase-shaded-client-byo-hadoop hbase-shaded/hbase-shaded-mapreduce hbase-external-blockcache hbase-hbtop hbase-assembly hbase-shaded/hbase-shaded-testing-util hbase-shaded/hbase-shaded-testing-util-tester hbase-shaded/hbase-shaded-check-invariants hbase-shaded/hbase-shaded-with-hadoop-check-invariants hbase-archetypes hbase-archetypes/hbase-client-project hbase-archetypes/hbase-shaded-client-project hbase-archetypes/hbase-archetype-builder . U: . | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-727/1/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) | | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache G
[jira] [Updated] (HBASE-22370) ByteBuf LEAK ERROR
[ https://issues.apache.org/jira/browse/HBASE-22370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-22370: -- Fix Version/s: 2.1.8 2.2.2 2.3.0 3.0.0 > ByteBuf LEAK ERROR > -- > > Key: HBASE-22370 > URL: https://issues.apache.org/jira/browse/HBASE-22370 > Project: HBase > Issue Type: Bug > Components: rpc, wal >Affects Versions: 2.2.1 >Reporter: Lijin Bin >Assignee: Lijin Bin >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.2, 2.1.8 > > Attachments: HBASE-22370-master-v1.patch > > > We ran a failover test and it threw a leak error; this is hard to reproduce. > {code} > 2019-05-06 02:30:27,781 ERROR [AsyncFSWAL-0] util.ResourceLeakDetector: LEAK: > ByteBuf.release() was not called before it's garbage-collected. See > http://netty.io/wiki/reference-counted-objects.html for more information. > Recent access records: > Created at: > > org.apache.hbase.thirdparty.io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:334) > > org.apache.hbase.thirdparty.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187) > > org.apache.hbase.thirdparty.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178) > > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush0(FanOutOneBlockAsyncDFSOutput.java:494) > > org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush(FanOutOneBlockAsyncDFSOutput.java:513) > > org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.sync(AsyncProtobufLogWriter.java:144) > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.sync(AsyncFSWAL.java:353) > > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.consume(AsyncFSWAL.java:536) > > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > java.lang.Thread.run(Thread.java:748) > {code} > If FanOutOneBlockAsyncDFSOutput#endBlock throws
an exception before calling > "buf.release();", the buffer never gets a chance to be released. > In CallRunner, if the call is skipped or a timed-out call is dropped, cleanup > is never called on the call. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache9 merged pull request #720: HBASE-22370 ByteBuf LEAK ERROR
Apache9 merged pull request #720: HBASE-22370 ByteBuf LEAK ERROR URL: https://github.com/apache/hbase/pull/720 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] Apache9 commented on a change in pull request #721: HBASE-23170 Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
Apache9 commented on a change in pull request #721: HBASE-23170 Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME URL: https://github.com/apache/hbase/pull/721#discussion_r335447694 ## File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java ## @@ -1048,8 +1048,7 @@ * @return current live region servers list wrapped by {@link CompletableFuture} */ default CompletableFuture> getRegionServers() { -return getClusterMetrics(EnumSet.of(Option.LIVE_SERVERS)) - .thenApply(cm -> cm.getLiveServerMetrics().keySet()); +return getClusterMetrics(EnumSet.of(Option.SERVERS_NAME)).thenApply(cm -> cm.getServersName()); Review comment: ClusterMetrics::getServersName? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
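A rough sketch of the shape of the change under review (with a tiny stand-in `ClusterMetrics`, not HBase's real class): the old default implementation requested full live-server metrics and kept only the key set, while the patch requests just the server names via `Option.SERVERS_NAME`, so the `CompletableFuture` composition stays the same but less data is fetched for the same result.

```java
// Sketch contrasting the two ClusterMetrics options named in this PR.
// `ClusterMetrics` here is a minimal stand-in, NOT the real HBase class;
// the point is the shape of the CompletableFuture composition.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

public class ServersNameSketch {
    static class ClusterMetrics {
        Map<String, Object> liveServerMetrics = new LinkedHashMap<>(); // name -> heavy metrics
        List<String> serversName = new ArrayList<>();                  // names only
    }

    static CompletableFuture<ClusterMetrics> getClusterMetrics() {
        ClusterMetrics cm = new ClusterMetrics();
        for (String s : Arrays.asList("rs1:16020", "rs2:16020")) {
            cm.serversName.add(s);
            cm.liveServerMetrics.put(s, new Object());
        }
        return CompletableFuture.completedFuture(cm);
    }

    // Old shape: fetch full live-server metrics, keep only the key set.
    static CompletableFuture<Collection<String>> getRegionServersOld() {
        return getClusterMetrics().thenApply(cm -> cm.liveServerMetrics.keySet());
    }

    // New shape: ask for the server names directly.
    static CompletableFuture<Collection<String>> getRegionServers() {
        return getClusterMetrics().thenApply(cm -> cm.serversName);
    }

    public static void main(String[] args) {
        System.out.println(getRegionServersOld().join());
        System.out.println(getRegionServers().join());
    }
}
```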
[GitHub] [hbase] Apache9 commented on a change in pull request #721: HBASE-23170 Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
Apache9 commented on a change in pull request #721: HBASE-23170 Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME URL: https://github.com/apache/hbase/pull/721#discussion_r335447964 ## File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionPlacementMaintainer.java ## @@ -204,9 +202,7 @@ private void genAssignmentPlan(TableName tableName, // Get the all the region servers List servers = new ArrayList<>(); -servers.addAll( - FutureUtils.get(getConnection().getAdmin().getClusterMetrics(EnumSet.of(Option.LIVE_SERVERS))) -.getLiveServerMetrics().keySet()); + servers.addAll(FutureUtils.get(getConnection().getAdmin().getRegionServers())); Review comment: Not introduced by you, but I wonder whether we need to close the Admin here? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [hbase] Apache9 commented on a change in pull request #726: HBASE-23177 If fail to open reference because FNFE, make it plain it …
Apache9 commented on a change in pull request #726: HBASE-23177 If fail to open reference because FNFE, make it plain it … URL: https://github.com/apache/hbase/pull/726#discussion_r335444663 ## File path: hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java ## @@ -19,11 +19,17 @@ import static org.junit.Assert.*; Review comment: Avoid the star import? Not your fault, but I think we can fix this checkstyle issue while we're at it. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Updated] (HBASE-20626) Change the value of "Requests Per Second" on WEBUI
[ https://issues.apache.org/jira/browse/HBASE-20626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-20626: -- Fix Version/s: 2.1.8 2.2.2 2.3.0 3.0.0 > Change the value of "Requests Per Second" on WEBUI > -- > > Key: HBASE-20626 > URL: https://issues.apache.org/jira/browse/HBASE-20626 > Project: HBase > Issue Type: Improvement > Components: metrics, UI >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.2.2, 2.1.8 > > Attachments: HBASE-20626.master.001.patch, > HBASE-20626.master.002.patch, HBASE-20626.master.003.patch > > > Now we use "totalRequestCount" (RSRpcServices#requestCount) to calculate > requests per second. > After HBASE-18469, "totalRequestCount" is incremented only once per multi > request (and includes requests that are not serviced by regions). > When we have a large number of read and write requests, the value of > "Requests Per Second" is very small, which does not reflect the load of the > cluster. > Maybe it is more reasonable to use "totalRowActionRequestCount" to calculate > RPS? -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23151) Backport HBASE-23083 (Collect Executor status info periodically and report to metrics system) to branch-1
[ https://issues.apache.org/jira/browse/HBASE-23151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16952734#comment-16952734 ] Hudson commented on HBASE-23151: Results for branch branch-1 [build #1108 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1108/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1108//General_Nightly_Build_Report/] (/) {color:green}+1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1108//JDK7_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1108//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Backport HBASE-23083 (Collect Executor status info periodically and report to > metrics system) to branch-1 > - > > Key: HBASE-23151 > URL: https://issues.apache.org/jira/browse/HBASE-23151 > Project: HBase > Issue Type: Sub-task >Reporter: Andrew Kyle Purtell >Assignee: chenxu >Priority: Minor > Fix For: 1.5.1 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-23083) Collect Executor status info periodically and report to metrics system
[ https://issues.apache.org/jira/browse/HBASE-23083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16952735#comment-16952735 ] Hudson commented on HBASE-23083: Results for branch branch-1 [build #1108 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1108/]: (/) *{color:green}+1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1108//General_Nightly_Build_Report/] (/) {color:green}+1 jdk7 checks{color} -- For more information [see jdk7 report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1108//JDK7_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/1108//JDK8_Nightly_Build_Report_(Hadoop2)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. > Collect Executor status info periodically and report to metrics system > -- > > Key: HBASE-23083 > URL: https://issues.apache.org/jira/browse/HBASE-23083 > Project: HBase > Issue Type: Improvement >Reporter: chenxu >Assignee: chenxu >Priority: Major > Fix For: 3.0.0, 2.3.0 > > > HRegionServer#startServiceThreads will start some Executors, but we don't > have a good way to know their status, such as how many threads are pending and > how many are running. We can add a ScheduledChore to collect this > information periodically and report it to the metrics system. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] guangxuCheng commented on issue #725: HBASE-23176 delete_all_snapshot does not work with regex
guangxuCheng commented on issue #725: HBASE-23176 delete_all_snapshot does not work with regex URL: https://github.com/apache/hbase/pull/725#issuecomment-542654359 https://github.com/apache/hbase/blob/fa05907b1b5ff3ac440a43937cae386b3638de5a/hbase-shell/src/main/ruby/shell/commands/delete_all_snapshot.rb#L57 This line also needs to be modified. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HBASE-20626) Change the value of "Requests Per Second" on WEBUI
[ https://issues.apache.org/jira/browse/HBASE-20626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16952724#comment-16952724 ] Guangxu Cheng commented on HBASE-20626: --- All tests pass locally. Pushed to master and branch-2. [~apurtell] Do branch-1 and branch-1.4 need this? [~zhangduo] [~zghao] What about branch-2.2 and branch-2.1? > Change the value of "Requests Per Second" on WEBUI > -- > > Key: HBASE-20626 > URL: https://issues.apache.org/jira/browse/HBASE-20626 > Project: HBase > Issue Type: Improvement > Components: metrics, UI >Reporter: Guangxu Cheng >Assignee: Guangxu Cheng >Priority: Major > Attachments: HBASE-20626.master.001.patch, > HBASE-20626.master.002.patch, HBASE-20626.master.003.patch > > > Now we use "totalRequestCount" (RSRpcServices#requestCount) to calculate > requests per second. > After HBASE-18469, "totalRequestCount" is incremented only once per multi > request (and includes requests that are not serviced by regions). > When we have a large number of read and write requests, the value of > "Requests Per Second" is very small, which does not reflect the load of the > cluster. > Maybe it is more reasonable to use "totalRowActionRequestCount" to calculate > RPS? -- This message was sent by Atlassian Jira (v8.3.4#803005)
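The metric discussed in this issue is a rate derived from a monotonically increasing counter: sample the counter at a fixed period and divide the delta by the elapsed time. Switching from "totalRequestCount" to "totalRowActionRequestCount" only changes which counter is sampled; the arithmetic is identical. A hedged sketch (names are illustrative, not HBase's implementation):

```java
// Illustrative rate gauge: sample a monotonically increasing counter and
// divide the delta by the elapsed time. Whether the counter is
// "totalRequestCount" or "totalRowActionRequestCount" only changes what
// gets sampled. Class and method names are illustrative, not HBase code.
public class RequestsPerSecond {
    private long lastCount;
    private long lastTimeMillis;

    public RequestsPerSecond(long startMillis) {
        this.lastTimeMillis = startMillis;
    }

    /** Returns the average rate since the previous sample. */
    public double sample(long currentCount, long nowMillis) {
        long deltaCount = currentCount - lastCount;
        long deltaMillis = nowMillis - lastTimeMillis;
        lastCount = currentCount;
        lastTimeMillis = nowMillis;
        return deltaMillis == 0 ? 0.0 : deltaCount * 1000.0 / deltaMillis;
    }

    public static void main(String[] args) {
        // One multi() call carrying 100 row actions over a 5-second window:
        // totalRequestCount advances by 1, totalRowActionRequestCount by 100,
        // so the gauge reads 0.2 vs 20.0 requests per second.
        System.out.println(new RequestsPerSecond(0).sample(1, 5000));   // 0.2
        System.out.println(new RequestsPerSecond(0).sample(100, 5000)); // 20.0
    }
}
```

The `main` above makes the issue's complaint concrete: under a heavy multi-based workload the per-RPC counter yields a gauge two orders of magnitude below the actual row-level load.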
[jira] [Commented] (HBASE-23156) start-hbase.sh failed with ClassNotFoundException when build with hadoop3
[ https://issues.apache.org/jira/browse/HBASE-23156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16952717#comment-16952717 ] Guanghao Zhang commented on HBASE-23156: The woodstox-core-5.0.3.jar was moved to lib/jdk11. > start-hbase.sh failed with ClassNotFoundException when build with hadoop3 > - > > Key: HBASE-23156 > URL: https://issues.apache.org/jira/browse/HBASE-23156 > Project: HBase > Issue Type: Bug >Reporter: Guanghao Zhang >Priority: Major > > {code:java} > Exception in thread "main" java.lang.NoClassDefFoundError: > com/ctc/wstx/io/InputBootstrapperException in thread "main" > java.lang.NoClassDefFoundError: com/ctc/wstx/io/InputBootstrapper at > org.apache.hadoop.hbase.util.HBaseConfTool.main(HBaseConfTool.java:39)Caused > by: java.lang.ClassNotFoundException: com.ctc.wstx.io.InputBootstrapper at > java.net.URLClassLoader.findClass(URLClassLoader.java:382) at > java.lang.ClassLoader.loadClass(ClassLoader.java:424) at > sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at > java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 1 moreException in > thread "main" java.lang.NoClassDefFoundError: > com/ctc/wstx/io/InputBootstrapper at > org.apache.hadoop.hbase.zookeeper.ZKServerTool.main(ZKServerTool.java:63)Caused > by: java.lang.ClassNotFoundException: com.ctc.wstx.io.InputBootstrapper at > java.net.URLClassLoader.findClass(URLClassLoader.java:382) at > java.lang.ClassLoader.loadClass(ClassLoader.java:424) at > sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at > java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 1 more > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-22514) Move rsgroup feature into core of HBase
[ https://issues.apache.org/jira/browse/HBASE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16952715#comment-16952715 ] Hudson commented on HBASE-22514: Results for branch HBASE-22514 [build #150 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/150/]: (x) *{color:red}-1 overall{color}* details (if available): (x) {color:red}-1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/150//General_Nightly_Build_Report/] (x) {color:red}-1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/150//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/150//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (x) {color:red}-1 client integration test{color} -- Failed when running client tests on top of Hadoop 2. [see log for details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/150//artifact/output-integration/hadoop-2.log]. (note that this means we didn't run on Hadoop 3) > Move rsgroup feature into core of HBase > --- > > Key: HBASE-22514 > URL: https://issues.apache.org/jira/browse/HBASE-22514 > Project: HBase > Issue Type: Umbrella > Components: Admin, Client, rsgroup >Reporter: Yechao Chen >Assignee: Duo Zhang >Priority: Major > Attachments: HBASE-22514.master.001.patch, > image-2019-05-31-18-25-38-217.png > > > The class RSGroupAdminClient is not public. > We need to use the Java API RSGroupAdminClient to manage rsgroups, > so RSGroupAdminClient should be public. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hbase] Apache-HBase commented on issue #720: HBASE-22370 ByteBuf LEAK ERROR
Apache-HBase commented on issue #720: HBASE-22370 ByteBuf LEAK ERROR URL: https://github.com/apache/hbase/pull/720#issuecomment-542644720 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | :blue_heart: | reexec | 1m 9s | Docker mode activated. | ||| _ Prechecks _ | | :green_heart: | dupname | 0m 0s | No case conflicting files found. | | :green_heart: | hbaseanti | 0m 0s | Patch does not have any anti-patterns. | | :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | :green_heart: | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. | ||| _ master Compile Tests _ | | :green_heart: | mvninstall | 5m 57s | master passed | | :green_heart: | compile | 0m 58s | master passed | | :green_heart: | checkstyle | 1m 31s | master passed | | :green_heart: | shadedjars | 5m 2s | branch has no errors when building our shaded downstream artifacts. | | :green_heart: | javadoc | 0m 37s | master passed | | :blue_heart: | spotbugs | 4m 27s | Used deprecated FindBugs config; considering switching to SpotBugs. | | :green_heart: | findbugs | 4m 24s | master passed | ||| _ Patch Compile Tests _ | | :green_heart: | mvninstall | 5m 42s | the patch passed | | :green_heart: | compile | 0m 58s | the patch passed | | :green_heart: | javac | 0m 58s | the patch passed | | :green_heart: | checkstyle | 1m 32s | the patch passed | | :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | :green_heart: | shadedjars | 5m 1s | patch has no errors when building our shaded downstream artifacts. | | :green_heart: | hadoopcheck | 17m 26s | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. | | :green_heart: | javadoc | 0m 34s | the patch passed | | :green_heart: | findbugs | 4m 38s | the patch passed | ||| _ Other Tests _ | | :green_heart: | unit | 241m 12s | hbase-server in the patch passed. 
| | :green_heart: | asflicense | 0m 28s | The patch does not generate ASF License warnings. | | | | 303m 45s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-720/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hbase/pull/720 | | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile | | uname | Linux 919a22e4bb24 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 GNU/Linux | | Build tool | maven | | Personality | /home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-720/out/precommit/personality/provided.sh | | git revision | master / 7924ba39e7 | | Default Java | 1.8.0_181 | | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-720/3/testReport/ | | Max. process+thread count | 4644 (vs. ulimit of 1) | | modules | C: hbase-server U: hbase-server | | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-720/3/console | | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 | | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[jira] [Commented] (HBASE-23060) Allow config rs group for a region
[ https://issues.apache.org/jira/browse/HBASE-23060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16952705#comment-16952705 ] Xiaolin Ha commented on HBASE-23060: I can work on this issue if we are sure we want to implement it like this. > Allow config rs group for a region > -- > > Key: HBASE-23060 > URL: https://issues.apache.org/jira/browse/HBASE-23060 > Project: HBase > Issue Type: Sub-task >Reporter: Duo Zhang >Priority: Major > > Sometimes we only want to separate out a region that is a hotspot, so it would > be good if we could configure an rs group for a single region. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-23093) Avoid Optional Anti-Pattern where possible
[ https://issues.apache.org/jira/browse/HBASE-23093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-23093: -- Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) > Avoid Optional Anti-Pattern where possible > -- > > Key: HBASE-23093 > URL: https://issues.apache.org/jira/browse/HBASE-23093 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0, 2.3.0, 1.6.0 >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Minor > Fix For: 3.0.0, 2.3.0, 2.2.2, 2.1.8 > > > Optional should be used as a return type only. It's a neat solution for > handling data that might not be present. We should avoid Optional > anti-patterns, i.e. using it as a field or parameter type, for these reasons: > 1. Optional parameters cause conditional logic inside methods, which is > not productive. > 2. Packing an argument in an Optional is suboptimal for the compiler and adds > unnecessary wrapping. > 3. An Optional field is not serializable. -- This message was sent by Atlassian Jira (v8.3.4#803005)
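The guideline in this issue can be illustrated with a small sketch (all names are made up for the example): `Optional` as a return type models possible absence cleanly, while an `Optional` parameter forces wrapping at the call site and branching inside the method; a plain overload avoids both.

```java
// Illustrative contrast between Optional as a return type (fine) and as a
// parameter (anti-pattern). All names are invented for this example.
import java.util.Optional;

public class OptionalUsage {
    // Good: the return type says "the user may not exist".
    static Optional<String> findUser(int id) {
        return id == 1 ? Optional.of("alice") : Optional.empty();
    }

    // Anti-pattern: callers must wrap arguments, and the method must branch.
    static String greetAnti(Optional<String> name) {
        return "hello " + name.orElse("guest");
    }

    // Preferred: plain overloads, no wrapping at the call site.
    static String greet() {
        return greet("guest");
    }

    static String greet(String name) {
        return "hello " + name;
    }

    public static void main(String[] args) {
        System.out.println(findUser(1).orElse("unknown")); // alice
        System.out.println(greetAnti(Optional.empty()));   // hello guest
        System.out.println(greet("alice"));                // hello alice
    }
}
```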
[jira] [Updated] (HBASE-23093) Avoid Optional Anti-Pattern where possible
[ https://issues.apache.org/jira/browse/HBASE-23093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-23093: -- Fix Version/s: (was: 1.6.0) 2.1.8 2.2.2 > Avoid Optional Anti-Pattern where possible > -- > > Key: HBASE-23093 > URL: https://issues.apache.org/jira/browse/HBASE-23093 > Project: HBase > Issue Type: Improvement >Affects Versions: 3.0.0, 2.3.0, 1.6.0 >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Minor > Fix For: 3.0.0, 2.3.0, 2.2.2, 2.1.8 > > > Optional should be used as a return type only. It's a neat solution for > handling data that might not be present. We should avoid Optional > anti-patterns, i.e. using it as a field or parameter type, for these reasons: > 1. Optional parameters cause conditional logic inside methods, which is > not productive. > 2. Packing an argument in an Optional is suboptimal for the compiler and adds > unnecessary wrapping. > 3. An Optional field is not serializable. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HBASE-22846) Internal Error 500 when Using HBASE REST API to Create Namespace.
[ https://issues.apache.org/jira/browse/HBASE-22846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-22846: -- Fix Version/s: (was: 2.2.1) 2.2.2 > Internal Error 500 when Using HBASE REST API to Create Namespace. > - > > Key: HBASE-22846 > URL: https://issues.apache.org/jira/browse/HBASE-22846 > Project: HBase > Issue Type: Improvement > Components: hbase-connectors >Affects Versions: 3.0.0, 2.2.0, 2.1.1 >Reporter: Sailesh Patel >Assignee: Wellington Chevreuil >Priority: Major > Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2 > > > When using the following URL to create a namespace: > Secured cluster: curl --negotiate -u : -i -k -vi -X POST > "http://HBASE_REST_API_HOST:20550/namespaces/datasparktest"; > Unsecured cluster: curl -vi -X POST > "http://HBASE_REST_API_HOST:20550/namespaces/datasparktest"; > The following is returned on the console: > HTTP/1.1 500 Request failed. > The error in the HBase REST server log is: > 2019-08-13 15:44:55,080 WARN org.eclipse.jetty.servlet.ServletHandler: > javax.servlet.ServletException: java.lang.NullPointerException > at > org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489) > at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427) > ... 
> Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hbase.rest.NamespacesInstanceResource.createOrUpdate(NamespacesInstanceResource.java:250) > at > org.apache.hadoop.hbase.rest.NamespacesInstanceResource.processUpdate(NamespacesInstanceResource.java:243) > at > org.apache.hadoop.hbase.rest.NamespacesInstanceResource.post(NamespacesInstanceResource.java:183) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81) > at > org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144) > at > org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161) > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HBASE-20626) Change the value of "Requests Per Second" on WEBUI
[ https://issues.apache.org/jira/browse/HBASE-20626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16952693#comment-16952693 ] HBase QA commented on HBASE-20626:
--
(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 33s | Docker mode activated. |
|| Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | hbaseanti | 0m 0s | Patch does not have any anti-patterns. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -0 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| master Compile Tests ||
| +1 | mvninstall | 5m 9s | master passed |
| +1 | compile | 0m 54s | master passed |
| +1 | checkstyle | 1m 16s | master passed |
| +1 | shadedjars | 4m 35s | branch has no errors when building our shaded downstream artifacts. |
| +1 | javadoc | 0m 34s | master passed |
| 0 | spotbugs | 4m 6s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 4m 5s | master passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 4m 51s | the patch passed |
| +1 | compile | 0m 53s | the patch passed |
| +1 | javac | 0m 53s | the patch passed |
| +1 | checkstyle | 1m 15s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedjars | 4m 31s | patch has no errors when building our shaded downstream artifacts. |
| +1 | hadoopcheck | 15m 32s | Patch does not cause any errors with Hadoop 2.8.5, 2.9.2, or 3.1.2. |
| +1 | javadoc | 0m 33s | the patch passed |
| +1 | findbugs | 3m 57s | the patch passed |
|| Other Tests ||
| +1 | unit | 163m 45s | hbase-server in the patch passed. |
| +1 | asflicense | 0m 33s | The patch does not generate ASF License warnings. |
| | | 220m 1s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/PreCommit-HBASE-Build/958/artifact/patchprocess/Dockerfile |
| JIRA Issue | HBASE-20626 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12983127/HBASE-20626.master.003.patch |
| Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
| uname | Linux f0de2a476122 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 7924ba39e7 |
| Default Java | 1.8.0_181 |
| Test Results | https://builds.apache.org/job/PreCommit-HBASE-Build/9
[jira] [Updated] (HBASE-22013) SpaceQuotas - getNumRegions() returning wrong number of regions due to region replicas
[ https://issues.apache.org/jira/browse/HBASE-22013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HBASE-22013: -- Fix Version/s: (was: 2.2.1) 2.2.2 > SpaceQuotas - getNumRegions() returning wrong number of regions due to region > replicas > -- > > Key: HBASE-22013 > URL: https://issues.apache.org/jira/browse/HBASE-22013 > Project: HBase > Issue Type: Bug >Reporter: Ajeet Rai >Assignee: Shardul Singh >Priority: Major > Labels: Quota, Space > Fix For: 3.0.0, 2.3.0, 2.1.7, 2.2.2 > > Attachments: HBASE-22013.branch-2.1.001.patch, > HBASE-22013.master.001.patch, HBASE-22013.master.002.patch, > HBASE-22013.master.003.patch, hbase-22013.branch-2.001.patch, > hbase-22013.branch-2.2.001.patch > > > Space Quota Issue: If a table is created with region replicas, then quota calculation does not happen. > Steps: > 1: Create a table with 100 regions and region replica 3 > 2: Observe that the 'hbase:quota' table has no usage entry for this table, so the UI shows only the policy limit and policy but not the usage and state. > Reason: > It looks like the file system utilization chore reports the sizes of the 100 actual regions but not the sizes of the region replicas, > while the quota observer chore counts the total number of regions (actual regions + replica regions). > So the ratio of reported regions is less than the configured > percentRegionsReportedThreshold, > and quota calculation does not happen.
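The threshold check described above can be sketched as follows. This is an illustrative helper, not HBase's actual code; the counts (100 primaries, replica factor 3) come from the reproduction steps, and the 0.95 threshold is an assumed value for the percent-regions-reported setting:

```java
// Sketch of the quota observer's "enough regions reported" check
// (illustrative, not the actual HBase implementation).
public class RegionReportRatio {
  // Returns true when enough regions have reported their sizes for quota
  // calculation to proceed.
  public static boolean enoughRegionsReported(int reportedRegions, int totalRegions,
      double percentRegionsReportedThreshold) {
    return ((double) reportedRegions / totalRegions) >= percentRegionsReportedThreshold;
  }

  public static void main(String[] args) {
    // 100 primary regions report their sizes, but with region replica 3 the
    // chore counts 300 regions in total: 100/300 ~= 0.33 < 0.95, so quota
    // calculation is skipped.
    System.out.println(enoughRegionsReported(100, 300, 0.95)); // false
    // Counting only primary regions fixes the ratio: 100/100 = 1.0.
    System.out.println(enoughRegionsReported(100, 100, 0.95)); // true
  }
}
```

This makes the bug mechanical: the numerator excludes replicas while the denominator includes them, so the check can never pass for a table with replicas.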
[jira] [Commented] (HBASE-23055) Alter hbase:meta
[ https://issues.apache.org/jira/browse/HBASE-23055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16952662#comment-16952662 ] Hudson commented on HBASE-23055: Results for branch HBASE-23055 [build #16 on builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/16/]: (x) *{color:red}-1 overall{color}* details (if available): (/) {color:green}+1 general checks{color} -- For more information [see general report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/16//General_Nightly_Build_Report/] (/) {color:green}+1 jdk8 hadoop2 checks{color} -- For more information [see jdk8 (hadoop2) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/16//JDK8_Nightly_Build_Report_(Hadoop2)/] (x) {color:red}-1 jdk8 hadoop3 checks{color} -- For more information [see jdk8 (hadoop3) report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/16//JDK8_Nightly_Build_Report_(Hadoop3)/] (/) {color:green}+1 source release artifact{color} -- See build output for details. (/) {color:green}+1 client integration test{color} > Alter hbase:meta > > > Key: HBASE-23055 > URL: https://issues.apache.org/jira/browse/HBASE-23055 > Project: HBase > Issue Type: Task >Reporter: Michael Stack >Assignee: Michael Stack >Priority: Major > Fix For: 3.0.0 > > > hbase:meta is currently hardcoded. Its schema cannot be changed. > This issue is about allowing edits to the hbase:meta schema. It will allow us to set encodings such as block-with-indexes, which will help quell CPU usage on the host carrying hbase:meta. A dynamic hbase:meta is the first step on the road to being able to split meta.
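Once hbase:meta becomes alterable, setting an encoding on it could look like an ordinary shell alter. This is a hypothetical sketch until HBASE-23055 lands; `ROW_INDEX_V1` is one of HBase's existing DATA_BLOCK_ENCODING values and is used here only as an example of an index-carrying block encoding:

```ruby
# Hypothetical once HBASE-23055 lands: alter hbase:meta like any other table.
alter 'hbase:meta', {NAME => 'info', DATA_BLOCK_ENCODING => 'ROW_INDEX_V1'}
```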
[jira] [Commented] (HBASE-23060) Allow config rs group for a region
[ https://issues.apache.org/jira/browse/HBASE-23060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16952649#comment-16952649 ] Duo Zhang commented on HBASE-23060: --- The region name will change after the region is split, but the range is more stable. > Allow config rs group for a region > -- > > Key: HBASE-23060 > URL: https://issues.apache.org/jira/browse/HBASE-23060 > Project: HBase > Issue Type: Sub-task >Reporter: Duo Zhang >Priority: Major > > Sometimes we only want to separate out a region that is a hotspot, so it would be good if we could configure an rs group for a single region.
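Keying the assignment by row-key range rather than region name, as the comment suggests, could look like the sketch below. Everything here is hypothetical (the `RangeGroupMap` class and `groupForRow` helper are not HBase APIs); it only illustrates why ranges survive splits: a daughter region's range is contained in its parent's, so a range-to-group mapping still resolves after the split, while a name-to-group mapping would not:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: map half-open row-key ranges [startKey, endKey) to an
// rs group name. Row keys are compared as unsigned bytes, matching how HBase
// orders them.
public class RangeGroupMap {
  private final Map<byte[][], String> ranges = new LinkedHashMap<>();

  public void addRange(byte[] startKey, byte[] endKey, String group) {
    ranges.put(new byte[][] { startKey, endKey }, group);
  }

  // Returns the group whose range contains the row, or null if none matches.
  public String groupForRow(byte[] row) {
    for (Map.Entry<byte[][], String> e : ranges.entrySet()) {
      byte[] start = e.getKey()[0];
      byte[] end = e.getKey()[1];
      boolean afterStart = Arrays.compareUnsigned(row, start) >= 0;
      // An empty end key means "to the end of the table".
      boolean beforeEnd = end.length == 0 || Arrays.compareUnsigned(row, end) < 0;
      if (afterStart && beforeEnd) {
        return e.getValue();
      }
    }
    return null;
  }

  public static void main(String[] args) {
    RangeGroupMap m = new RangeGroupMap();
    m.addRange("b".getBytes(), "d".getBytes(), "hot_group");
    // A split of [b, d) into [b, c) and [c, d) changes region names, but any
    // row in either daughter still falls inside [b, d) and resolves the same.
    System.out.println(m.groupForRow("c".getBytes())); // hot_group
  }
}
```

A real implementation would need a sorted structure (e.g. a TreeMap over start keys) for efficient lookup, but a linear scan keeps the idea visible.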