[GitHub] [hbase] Apache-HBase commented on issue #731: HBASE-23185 Fix high cpu usage because getTable()#put() gets config value every time

2019-10-17 Thread GitBox
Apache-HBase commented on issue #731: HBASE-23185 Fix high cpu usage because 
getTable()#put() gets config value every time
URL: https://github.com/apache/hbase/pull/731#issuecomment-543519555
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   | :---: | ---: | :--- | :--- |
   | :blue_heart: |  reexec  |   0m 40s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :yellow_heart: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ branch-1 Compile Tests _ |
   | :green_heart: |  mvninstall  |   8m 37s |  branch-1 passed  |
   | :green_heart: |  compile  |   0m 19s |  branch-1 passed with JDK 
v1.8.0_232  |
   | :green_heart: |  compile  |   0m 23s |  branch-1 passed with JDK 
v1.7.0_242  |
   | :green_heart: |  checkstyle  |   0m 41s |  branch-1 passed  |
   | :green_heart: |  shadedjars  |   2m 54s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 26s |  branch-1 passed with JDK 
v1.8.0_232  |
   | :green_heart: |  javadoc  |   0m 27s |  branch-1 passed with JDK 
v1.7.0_242  |
   | :blue_heart: |  spotbugs  |   1m 30s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   1m 28s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   1m 56s |  the patch passed  |
   | :green_heart: |  compile  |   0m 21s |  the patch passed with JDK 
v1.8.0_232  |
   | :green_heart: |  javac  |   0m 21s |  the patch passed  |
   | :green_heart: |  compile  |   0m 24s |  the patch passed with JDK 
v1.7.0_242  |
   | :green_heart: |  javac  |   0m 24s |  the patch passed  |
   | :green_heart: |  checkstyle  |   0m 36s |  hbase-client: The patch 
generated 0 new + 142 unchanged - 1 fixed = 142 total (was 143)  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   2m 45s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |   5m  0s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2.  |
   | :green_heart: |  javadoc  |   0m 20s |  the patch passed with JDK 
v1.8.0_232  |
   | :green_heart: |  javadoc  |   0m 27s |  the patch passed with JDK 
v1.7.0_242  |
   | :green_heart: |  findbugs  |   1m 30s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  |   2m 39s |  hbase-client in the patch passed.  |
   | :green_heart: |  asflicense  |   0m 17s |  The patch does not generate ASF 
License warnings.  |
   |  |   |  35m 37s |   |
   
   
   | Subsystem | Report/Notes |
   | ---: | :--- |
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-731/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/731 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 64ed37c21f2a 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-731/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / 924e0f7 |
   | Default Java | 1.7.0_242 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_232 
/usr/lib/jvm/zulu-7-amd64:1.7.0_242 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-731/2/testReport/
 |
   | Max. process+thread count | 148 (vs. ulimit of 1) |
   | modules | C: hbase-client U: hbase-client |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-731/2/console |
   | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-19663) javadoc creation needs jsr305

2019-10-17 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-19663:

Fix Version/s: 1.5.1
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> javadoc creation needs jsr305
> -----------------------------
>
> Key: HBASE-19663
> URL: https://issues.apache.org/jira/browse/HBASE-19663
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, website
>Reporter: Michael Stack
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 1.4.11, 1.5.1
>
> Attachments: HBASE-19663-branch-1.4.v0.patch, script.sh
>
>
> Cryptic failure trying to build beta-1 RC. Fails like this:
> {code}
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 03:54 min
> [INFO] Finished at: 2017-12-29T01:13:15-08:00
> [INFO] Final Memory: 381M/9165M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
> hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate:
> [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS
> [ERROR] reason: class file for javax.annotation.meta.When not found
> [ERROR] warning: unknown enum constant When.UNKNOWN
> [ERROR] warning: unknown enum constant When.MAYBE
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))"
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found.
> [ERROR] javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found
> [ERROR]
> [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc 
> -J-Xmx2G @options @packages
> [ERROR]
> [ERROR] Refer to the generated Javadoc files in 
> '/home/stack/hbase.git/target/site/apidocs' dir.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
> javax.annotation.meta.TypeQualifierNickname comes out of jsr305, but we don't 
> include jsr305 anywhere according to mvn dependency.
> Happens building the User API, both test and main.
> Excluding these lines gets us passing again:
> {code}
>   3511   
>   3512 
> org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
>   3513   
>   3514   
>   3515 org.apache.yetus
>   3516 audience-annotations
>   3517 ${audience-annotations.version}
>   3518   
> + 3519   true
> {code}
> Tried upgrading to a newer mvn site plugin (ours is three years old), but that 
> hit a different set of problems.
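
For reference, a minimal sketch of the kind of maven-javadoc-plugin configuration involved. The `doclet`, `docletArtifact`, and `additionalDependencies` elements are standard maven-javadoc-plugin options, but the exact lines committed for this issue may differ, and the jsr305 coordinates/version shown are an assumption, not taken from the patch:

```xml
<!-- Sketch only: declare the Yetus doclet as a docletArtifact, and make the
     jsr305 annotation classes visible to javadoc via an additionalDependency.
     The jsr305 version here is illustrative. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <doclet>org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet</doclet>
    <docletArtifact>
      <groupId>org.apache.yetus</groupId>
      <artifactId>audience-annotations</artifactId>
      <version>${audience-annotations.version}</version>
    </docletArtifact>
    <additionalDependencies>
      <additionalDependency>
        <groupId>com.google.code.findbugs</groupId>
        <artifactId>jsr305</artifactId>
        <version>3.0.2</version>
      </additionalDependency>
    </additionalDependencies>
  </configuration>
</plugin>
```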



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-19663) javadoc creation needs jsr305

2019-10-17 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-19663:

Priority: Major  (was: Blocker)

> javadoc creation needs jsr305
> -----------------------------
>
> Key: HBASE-19663
> URL: https://issues.apache.org/jira/browse/HBASE-19663
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, website
>Reporter: Michael Stack
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 1.4.11
>
> Attachments: HBASE-19663-branch-1.4.v0.patch, script.sh
>
>
> Cryptic failure trying to build beta-1 RC. Fails like this:
> {code}
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 03:54 min
> [INFO] Finished at: 2017-12-29T01:13:15-08:00
> [INFO] Final Memory: 381M/9165M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
> hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate:
> [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS
> [ERROR] reason: class file for javax.annotation.meta.When not found
> [ERROR] warning: unknown enum constant When.UNKNOWN
> [ERROR] warning: unknown enum constant When.MAYBE
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))"
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found.
> [ERROR] javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found
> [ERROR]
> [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc 
> -J-Xmx2G @options @packages
> [ERROR]
> [ERROR] Refer to the generated Javadoc files in 
> '/home/stack/hbase.git/target/site/apidocs' dir.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
> javax.annotation.meta.TypeQualifierNickname comes out of jsr305, but we don't 
> include jsr305 anywhere according to mvn dependency.
> Happens building the User API, both test and main.
> Excluding these lines gets us passing again:
> {code}
>   3511   
>   3512 
> org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
>   3513   
>   3514   
>   3515 org.apache.yetus
>   3516 audience-annotations
>   3517 ${audience-annotations.version}
>   3518   
> + 3519   true
> {code}
> Tried upgrading to a newer mvn site plugin (ours is three years old), but that 
> hit a different set of problems.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-19663) javadoc creation needs jsr305

2019-10-17 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-19663:

Summary: javadoc creation needs jsr305  (was: site build fails complaining 
"javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname 
not found")

> javadoc creation needs jsr305
> -----------------------------
>
> Key: HBASE-19663
> URL: https://issues.apache.org/jira/browse/HBASE-19663
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, website
>Reporter: Michael Stack
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 1.4.11
>
> Attachments: HBASE-19663-branch-1.4.v0.patch, script.sh
>
>
> Cryptic failure trying to build beta-1 RC. Fails like this:
> {code}
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 03:54 min
> [INFO] Finished at: 2017-12-29T01:13:15-08:00
> [INFO] Final Memory: 381M/9165M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
> hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate:
> [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS
> [ERROR] reason: class file for javax.annotation.meta.When not found
> [ERROR] warning: unknown enum constant When.UNKNOWN
> [ERROR] warning: unknown enum constant When.MAYBE
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))"
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found.
> [ERROR] javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found
> [ERROR]
> [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc 
> -J-Xmx2G @options @packages
> [ERROR]
> [ERROR] Refer to the generated Javadoc files in 
> '/home/stack/hbase.git/target/site/apidocs' dir.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
> javax.annotation.meta.TypeQualifierNickname comes out of jsr305, but we don't 
> include jsr305 anywhere according to mvn dependency.
> Happens building the User API, both test and main.
> Excluding these lines gets us passing again:
> {code}
>   3511   
>   3512 
> org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
>   3513   
>   3514   
>   3515 org.apache.yetus
>   3516 audience-annotations
>   3517 ${audience-annotations.version}
>   3518   
> + 3519   true
> {code}
> Tried upgrading to a newer mvn site plugin (ours is three years old), but that 
> hit a different set of problems.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23172) HBase Canary region success count metrics reflect column family successes, not region successes

2019-10-17 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954264#comment-16954264
 ] 

Michael Stack commented on HBASE-23172:
---

+1



> HBase Canary region success count metrics reflect column family successes, 
> not region successes
> ---
>
> Key: HBASE-23172
> URL: https://issues.apache.org/jira/browse/HBASE-23172
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Affects Versions: 3.0.0, 1.3.0, 1.4.0, 1.5.0, 2.0.0, 2.1.5, 2.2.1
>Reporter: Caroline
>Assignee: Caroline
>Priority: Minor
> Attachments: HBASE-23172.branch-1.000.patch, 
> HBASE-23172.branch-2.000.patch, HBASE-23172.master.000.patch
>
>
> HBase Canary reads once per column family per region. The current "region 
> success count" should actually be "column family success count," which means 
> we need another metric that actually reflects region success count. 
> Additionally, the region read and write latencies only store the latencies of 
> the last column family of the region read. Instead of a map of regions to a 
> single latency value and success value, we should map each region to a list 
> of such values.
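
The data-structure change described above can be sketched as follows. This is a hypothetical, self-contained illustration, not the actual Canary code: the class and method names are stand-ins. Each region maps to a list of per-column-family latency/success entries, and region-level success is derived from all of its column families instead of only the last one read:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the proposed metrics bookkeeping.
public class CanaryMetricsSketch {
    static final class CfRead {
        final long latencyMs;
        final boolean ok;
        CfRead(long latencyMs, boolean ok) { this.latencyMs = latencyMs; this.ok = ok; }
    }

    // region name -> one entry per column family probed (keeps ALL latencies,
    // not just the last column family's, as the issue requests)
    private final Map<String, List<CfRead>> readsByRegion = new HashMap<>();

    public void recordColumnFamilyRead(String region, long latencyMs, boolean ok) {
        readsByRegion.computeIfAbsent(region, r -> new ArrayList<>()).add(new CfRead(latencyMs, ok));
    }

    // What the old "region success count" effectively measured.
    public long columnFamilySuccessCount() {
        return readsByRegion.values().stream().flatMap(List::stream).filter(r -> r.ok).count();
    }

    // A region counts as successful only if every column family probe succeeded.
    public long regionSuccessCount() {
        return readsByRegion.values().stream()
            .filter(reads -> reads.stream().allMatch(r -> r.ok))
            .count();
    }

    // All latencies observed for a region, one per column family.
    public List<Long> regionLatencies(String region) {
        List<Long> out = new ArrayList<>();
        for (CfRead r : readsByRegion.getOrDefault(region, Collections.emptyList())) {
            out.add(r.latencyMs);
        }
        return out;
    }
}
```

With this shape, a region with one failing column family no longer inflates the region success count, while per-column-family latencies remain individually reportable.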



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #731: HBASE-23185 Fix high cpu usage because getTable()#put() gets config value every time

2019-10-17 Thread GitBox
Apache-HBase commented on issue #731: HBASE-23185 Fix high cpu usage because 
getTable()#put() gets config value every time
URL: https://github.com/apache/hbase/pull/731#issuecomment-543503094
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   | :---: | ---: | :--- | :--- |
   | :blue_heart: |  reexec  |   0m 39s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :yellow_heart: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ branch-1 Compile Tests _ |
   | :green_heart: |  mvninstall  |  10m 12s |  branch-1 passed  |
   | :green_heart: |  compile  |   0m 20s |  branch-1 passed with JDK 
v1.8.0_232  |
   | :green_heart: |  compile  |   0m 23s |  branch-1 passed with JDK 
v1.7.0_242  |
   | :green_heart: |  checkstyle  |   0m 43s |  branch-1 passed  |
   | :green_heart: |  shadedjars  |   2m 50s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 28s |  branch-1 passed with JDK 
v1.8.0_232  |
   | :green_heart: |  javadoc  |   0m 25s |  branch-1 passed with JDK 
v1.7.0_242  |
   | :blue_heart: |  spotbugs  |   1m 31s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   1m 27s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   1m 57s |  the patch passed  |
   | :green_heart: |  compile  |   0m 19s |  the patch passed with JDK 
v1.8.0_232  |
   | :green_heart: |  javac  |   0m 19s |  the patch passed  |
   | :green_heart: |  compile  |   0m 23s |  the patch passed with JDK 
v1.7.0_242  |
   | :green_heart: |  javac  |   0m 23s |  the patch passed  |
   | :broken_heart: |  checkstyle  |   0m 38s |  hbase-client: The patch 
generated 7 new + 142 unchanged - 1 fixed = 149 total (was 143)  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   2m 48s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |   5m  3s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2.  |
   | :green_heart: |  javadoc  |   0m 20s |  the patch passed with JDK 
v1.8.0_232  |
   | :green_heart: |  javadoc  |   0m 25s |  the patch passed with JDK 
v1.7.0_242  |
   | :green_heart: |  findbugs  |   1m 31s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  |   2m 38s |  hbase-client in the patch passed.  |
   | :green_heart: |  asflicense  |   0m 18s |  The patch does not generate ASF 
License warnings.  |
   |  |   |  37m 28s |   |
   
   
   | Subsystem | Report/Notes |
   | ---: | :--- |
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-731/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/731 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux ba66d440b7da 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-731/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / 425d84d |
   | Default Java | 1.7.0_242 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_232 
/usr/lib/jvm/zulu-7-amd64:1.7.0_242 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-731/1/artifact/out/diff-checkstyle-hbase-client.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-731/1/testReport/
 |
   | Max. process+thread count | 148 (vs. ulimit of 1) |
   | modules | C: hbase-client U: hbase-client |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-731/1/console |
   | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-23185) High cpu usage because getTable()#put() gets config value every time

2019-10-17 Thread Shinya Yoshida (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinya Yoshida updated HBASE-23185:
---
Description: 
When we analyzed the performance of our HBase application, which issues many 
puts, we found that Configuration methods consume significant CPU:

!Screenshot from 2019-10-18 12-38-14.png|width=460,height=205!

As you can see, getTable().put() calls Configuration methods that trigger regex 
evaluation or synchronization via Hashtable.

This should not happen since 0.99.2, because 
https://issues.apache.org/jira/browse/HBASE-12128 addressed exactly this issue.
 However, it has resurfaced due to bugs or leakages introduced over the many 
code evolutions between 0.9x and 1.x.
 # [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java#L369-L374]
 ** finishSetup is called on every new HTable(), i.e. on every con.getTable()
 ** So getInt is called every time, and it performs a regex match
 # [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/BufferedMutatorImpl.java#L115]
 ** A BufferedMutatorImpl is created on the first put for each HTable, e.g. by con.getTable().put()
 ** The BufferedMutatorImpl constructor creates a ConnectionConf every time
 ** ConnectionConf reads config values in its constructor
 # [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java#L326]
 ** An AsyncProcess is created in the BufferedMutatorImpl constructor, so a new AsyncProcess is created by con.getTable().put()
 ** AsyncProcess parses many configuration values

So con.getTable().put() is a CPU-heavy operation because of these configuration 
lookups.
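
To illustrate the direction of such a fix, here is a minimal, self-contained Java sketch. The class names (ConnConfig, Connection, Table) and property keys are illustrative stand-ins, not HBase's actual branch-1 classes: configuration values are parsed once when the connection is created and then reused by every table handle, so the hot path never touches the Configuration object:

```java
import java.util.Properties;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: parse config once per connection, share the parsed
// values with every Table the connection creates.
final class ConnConfig {
    static final AtomicInteger parseCount = new AtomicInteger();  // instrumentation for the demo

    final long writeBufferSize;
    final int operationTimeout;

    ConnConfig(Properties conf) {
        parseCount.incrementAndGet();
        // In a real Configuration object each get may involve regex
        // substitution or synchronized Hashtable access.
        this.writeBufferSize = Long.parseLong(conf.getProperty("hbase.client.write.buffer", "2097152"));
        this.operationTimeout = Integer.parseInt(conf.getProperty("hbase.client.operation.timeout", "1200000"));
    }
}

final class Connection {
    private final ConnConfig cached;  // parsed exactly once, at connection creation

    Connection(Properties conf) { this.cached = new ConnConfig(conf); }

    Table getTable(String name) { return new Table(name, cached); }  // no re-parsing here
}

final class Table {
    final String name;
    final ConnConfig conf;
    Table(String name, ConnConfig conf) { this.name = name; this.conf = conf; }
}

public class ConfigCachingDemo {
    public static void main(String[] args) {
        Connection conn = new Connection(new Properties());
        for (int i = 0; i < 1000; i++) {
            conn.getTable("t");  // hot path: no Configuration access at all
        }
        System.out.println(ConnConfig.parseCount.get());  // 1
    }
}
```

The point of the pattern is that getTable() becomes a cheap object allocation; all regex/synchronization cost is paid once per connection rather than once per put.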

 

With an in-house patch for this issue, we observed about a 10% improvement in 
max throughput (i.e. CPU usage) on the client side:

!Screenshot from 2019-10-18 13-03-24.png|width=508,height=223!

 

I confirmed branch-2 is not affected because the client implementation has 
changed dramatically.
  

  was:
When we analyzed the performance of our HBase application, which issues many 
puts, we found that Configuration methods consume significant CPU:

!Screenshot from 2019-10-18 12-38-14.png|width=460,height=205!

As you can see, getTable().put() calls Configuration methods that trigger regex 
evaluation or synchronization via Hashtable.

This should not happen since 0.99.2, because 
https://issues.apache.org/jira/browse/HBASE-12128 addressed exactly this issue.
 However, it has resurfaced due to bugs or leakages introduced over the many 
code evolutions between 0.9x and 1.x.
 # [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java#L369-L374]
 ** finishSetup is called on every new HTable(), i.e. on every con.getTable()
 ** So getInt is called every time, and it performs a regex match
 # [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/BufferedMutatorImpl.java#L115]
 ** A BufferedMutatorImpl is created on the first put for each HTable, e.g. by con.getTable().put()
 ** The BufferedMutatorImpl constructor creates a ConnectionConf every time
 ** ConnectionConf reads config values in its constructor
 # [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java#L326]
 ** An AsyncProcess is created in the BufferedMutatorImpl constructor, so a new AsyncProcess is created by con.getTable().put()
 ** AsyncProcess parses many configuration values

So con.getTable().put() is a CPU-heavy operation because of these configuration 
lookups.

 

With an in-house patch for this issue, we observed about a 10% improvement in 
max throughput (i.e. CPU usage) on the client side:

!Screenshot from 2019-10-18 13-03-24.png|width=508,height=223!
  


> High cpu usage because getTable()#put() gets config value every time
> --------------------------------------------------------------------
>
> Key: HBASE-23185
> URL: https://issues.apache.org/jira/browse/HBASE-23185
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.5.0, 1.4.10, 1.2.12, 1.3.5
>Reporter: Shinya Yoshida
>Assignee: Shinya Yoshida
>Priority: Major
>  Labels: performance
> Attachments: Screenshot from 2019-10-18 12-38-14.png, Screenshot from 
> 2019-10-18 13-03-24.png
>
>
> When we analyzed the performance of our HBase application, which issues many 
> puts, we found that Configuration methods consume significant CPU:
> !Screenshot from 2019-10-18 12-38-14.png|width=460,height=205!
> As you can see, getTable().put() calls Configuration methods that trigger 
> regex evaluation or synchronization via Hashtable.
> This should not happen since 0.99.2, because 

[jira] [Updated] (HBASE-23185) High cpu usage because getTable()#put() gets config value every time

2019-10-17 Thread Shinya Yoshida (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinya Yoshida updated HBASE-23185:
---
Description: 
When we analyzed the performance of our HBase application, which issues many 
puts, we found that Configuration methods consume significant CPU:

!Screenshot from 2019-10-18 12-38-14.png|width=460,height=205!

As you can see, getTable().put() calls Configuration methods that trigger regex 
evaluation or synchronization via Hashtable.

This should not happen since 0.99.2, because 
https://issues.apache.org/jira/browse/HBASE-12128 addressed exactly this issue.
 However, it has resurfaced due to bugs or leakages introduced over the many 
code evolutions between 0.9x and 1.x.
 # [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java#L369-L374]
 ** finishSetup is called on every new HTable(), i.e. on every con.getTable()
 ** So getInt is called every time, and it performs a regex match
 # [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/BufferedMutatorImpl.java#L115]
 ** A BufferedMutatorImpl is created on the first put for each HTable, e.g. by con.getTable().put()
 ** The BufferedMutatorImpl constructor creates a ConnectionConf every time
 ** ConnectionConf reads config values in its constructor
 # [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java#L326]
 ** An AsyncProcess is created in the BufferedMutatorImpl constructor, so a new AsyncProcess is created by con.getTable().put()
 ** AsyncProcess parses many configuration values

So con.getTable().put() is a CPU-heavy operation because of these configuration 
lookups.

 

With an in-house patch for this issue, we observed about a 10% improvement in 
max throughput (i.e. CPU usage) on the client side:

!Screenshot from 2019-10-18 13-03-24.png|width=508,height=223!

 

Branch-2 seems not to be affected, because the client implementation has 
changed dramatically.
  

  was:
When we analyzed the performance of our HBase application, which issues many 
puts, we found that Configuration methods consume significant CPU:

!Screenshot from 2019-10-18 12-38-14.png|width=460,height=205!

As you can see, getTable().put() calls Configuration methods that trigger regex 
evaluation or synchronization via Hashtable.

This should not happen since 0.99.2, because 
https://issues.apache.org/jira/browse/HBASE-12128 addressed exactly this issue.
 However, it has resurfaced due to bugs or leakages introduced over the many 
code evolutions between 0.9x and 1.x.
 # [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java#L369-L374]
 ** finishSetup is called on every new HTable(), i.e. on every con.getTable()
 ** So getInt is called every time, and it performs a regex match
 # [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/BufferedMutatorImpl.java#L115]
 ** A BufferedMutatorImpl is created on the first put for each HTable, e.g. by con.getTable().put()
 ** The BufferedMutatorImpl constructor creates a ConnectionConf every time
 ** ConnectionConf reads config values in its constructor
 # [https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java#L326]
 ** An AsyncProcess is created in the BufferedMutatorImpl constructor, so a new AsyncProcess is created by con.getTable().put()
 ** AsyncProcess parses many configuration values

So con.getTable().put() is a CPU-heavy operation because of these configuration 
lookups.

 

With an in-house patch for this issue, we observed about a 10% improvement in 
max throughput (i.e. CPU usage) on the client side:

!Screenshot from 2019-10-18 13-03-24.png|width=508,height=223!

 

I confirmed branch-2 is not affected because the client implementation has 
changed dramatically.
  


> High cpu usage because getTable()#put() gets config value every time
> --------------------------------------------------------------------
>
> Key: HBASE-23185
> URL: https://issues.apache.org/jira/browse/HBASE-23185
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.5.0, 1.4.10, 1.2.12, 1.3.5
>Reporter: Shinya Yoshida
>Assignee: Shinya Yoshida
>Priority: Major
>  Labels: performance
> Attachments: Screenshot from 2019-10-18 12-38-14.png, Screenshot from 
> 2019-10-18 13-03-24.png
>
>
> When we analyzed the performance of our HBase application, which issues many 
> puts, we found that Configuration methods consume significant CPU:
> !Screenshot from 2019-10-18 12-38-14.png|width=460,height=205!
> As you can see, getTable().put() is calling Configuration methods 

[GitHub] [hbase] bitterfox opened a new pull request #731: HBASE-23185 Fix high cpu usage because getTable()#put() gets config value every time

2019-10-17 Thread GitBox
bitterfox opened a new pull request #731: HBASE-23185 Fix high cpu usage 
because getTable()#put() gets config value every time
URL: https://github.com/apache/hbase/pull/731
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-23185) High cpu usage because getTable()#put() gets config value every time

2019-10-17 Thread Shinya Yoshida (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinya Yoshida updated HBASE-23185:
---
Description: 
When we analyzed the performance of our HBase application, which issues many 
puts, we found that Configuration methods consume significant CPU:

!Screenshot from 2019-10-18 12-38-14.png|width=460,height=205!

As you can see, getTable().put() calls Configuration methods that perform 
regex matching or synchronize on a Hashtable.

This should not happen as of 0.99.2, because 
https://issues.apache.org/jira/browse/HBASE-12128 addressed such an issue.
 However, it has resurfaced through bugs or regressions introduced as the code 
evolved between 0.9x and 1.x.
 # 
[https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java#L369-L374]
 ** finishSetup is called on every new HTable(), i.e. on every con.getTable()
 ** So getInt is called every time, and it performs regex matching
 # 
[https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/BufferedMutatorImpl.java#L115]
 ** A BufferedMutatorImpl is created on the first put for each HTable, i.e. by 
con.getTable().put()
 ** A ConnectionConf is created every time in the BufferedMutatorImpl constructor
 ** ConnectionConf reads config values in its constructor
 # 
[https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java#L326]
 ** An AsyncProcess is created in the BufferedMutatorImpl constructor, so a new 
AsyncProcess is created by each con.getTable().put()
 ** AsyncProcess parses many configuration values
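The shared cause in the three paths above is that a freshly parsed configuration object is built per table or per first put. One fix direction is to parse once per Connection and hand the same instance to every table. The classes below are illustrative stand-ins assuming that design, not the actual patch code:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative stand-ins only (not the actual patch): the parsed
// ConnectionConf is created once per Connection and shared with every
// table instance, instead of being re-parsed per getTable() call.
class ConnectionConf {
    static final AtomicInteger CONSTRUCTED = new AtomicInteger();
    final long writeBufferSize;

    ConnectionConf() {
        CONSTRUCTED.incrementAndGet();
        // a real implementation would read hbase.client.write.buffer etc. here
        this.writeBufferSize = 2L * 1024 * 1024;
    }
}

class StubTable {
    final String name;
    final ConnectionConf conf;

    StubTable(String name, ConnectionConf conf) {
        this.name = name;
        this.conf = conf; // reuse the connection-level config
    }
}

class StubConnection {
    private final ConnectionConf connConf = new ConnectionConf(); // parsed once

    StubTable getTable(String name) {
        return new StubTable(name, connConf); // no re-parsing per table
    }
}

public class SharedConnectionConfSketch {
    public static void main(String[] args) {
        StubConnection con = new StubConnection();
        for (int i = 0; i < 100; i++) {
            con.getTable("t"); // would previously parse config each time
        }
        System.out.println(ConnectionConf.CONSTRUCTED.get()); // 1
    }
}
```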

As a result, con.getTable().put() is a CPU-heavy operation because it reads 
configuration values every time.

 

With an in-house patch for this issue, we observed about a 10% improvement in 
max throughput (i.e. CPU usage) at the client side:

!Screenshot from 2019-10-18 13-03-24.png|width=508,height=223!
  

  was:
When we analyzed the performance of our HBase application, which issues many 
puts, we found that Configuration methods consume significant CPU:

!Screenshot from 2019-10-18 12-38-14.png!

As you can see, getTable().put() calls Configuration methods that perform 
regex matching or synchronize on a Hashtable.

This should not happen as of 0.99.2, because 
https://issues.apache.org/jira/browse/HBASE-12128 addressed such an issue.
However, it has resurfaced through bugs or regressions introduced as the code 
evolved between 0.9x and 1.x.
 # 
[https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java#L369-L374]
 ** finishSetup is called on every new HTable(), i.e. on every con.getTable()
 ** So getInt is called every time, and it performs regex matching
 # 
[https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/BufferedMutatorImpl.java#L115]
 ** A BufferedMutatorImpl is created on the first put for each HTable, i.e. by 
con.getTable().put()
 ** A ConnectionConf is created every time in the BufferedMutatorImpl constructor
 ** ConnectionConf reads config values in its constructor
 # 
[https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java#L326]
 ** An AsyncProcess is created in the BufferedMutatorImpl constructor, so a new 
AsyncProcess is created by each con.getTable().put()
 ** AsyncProcess parses many configuration values

As a result, con.getTable().put() is a CPU-heavy operation because it reads 
configuration values every time.

 

With an in-house patch for this issue, we observed about a 10% improvement in 
max throughput (i.e. CPU usage) at the client side:

!Screenshot from 2019-10-18 13-03-24.png!
 


> High cpu usage because getTable()#put() gets config value every time
> 
>
> Key: HBASE-23185
> URL: https://issues.apache.org/jira/browse/HBASE-23185
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 1.5.0, 1.4.10, 1.2.12, 1.3.5
>Reporter: Shinya Yoshida
>Assignee: Shinya Yoshida
>Priority: Major
>  Labels: performance
> Attachments: Screenshot from 2019-10-18 12-38-14.png, Screenshot from 
> 2019-10-18 13-03-24.png
>
>
> When we analyzed the performance of our HBase application, which issues many 
> puts, we found that Configuration methods consume significant CPU:
> !Screenshot from 2019-10-18 12-38-14.png|width=460,height=205!
> As you can see, getTable().put() calls Configuration methods that perform 
> regex matching or synchronize on a Hashtable.
> This should not happen in 0.99.2 because 
> https://issues.apache.org/jira/browse/HBASE-12128 addressed such an issue.
>  However, it's reproducing nowadays by bugs or leakages after many 

[jira] [Created] (HBASE-23185) High cpu usage because getTable()#put() gets config value every time

2019-10-17 Thread Shinya Yoshida (Jira)
Shinya Yoshida created HBASE-23185:
--

 Summary: High cpu usage because getTable()#put() gets config value 
every time
 Key: HBASE-23185
 URL: https://issues.apache.org/jira/browse/HBASE-23185
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.3.5, 1.2.12, 1.4.10, 1.5.0
Reporter: Shinya Yoshida
Assignee: Shinya Yoshida
 Attachments: Screenshot from 2019-10-18 12-38-14.png, Screenshot from 
2019-10-18 13-03-24.png

When we analyzed the performance of our HBase application, which issues many 
puts, we found that Configuration methods consume significant CPU:

!Screenshot from 2019-10-18 12-38-14.png!

As you can see, getTable().put() calls Configuration methods that perform 
regex matching or synchronize on a Hashtable.

This should not happen as of 0.99.2, because 
https://issues.apache.org/jira/browse/HBASE-12128 addressed such an issue.
However, it has resurfaced through bugs or regressions introduced as the code 
evolved between 0.9x and 1.x.
 # 
[https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java#L369-L374]
 ** finishSetup is called on every new HTable(), i.e. on every con.getTable()
 ** So getInt is called every time, and it performs regex matching
 # 
[https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/BufferedMutatorImpl.java#L115]
 ** A BufferedMutatorImpl is created on the first put for each HTable, i.e. by 
con.getTable().put()
 ** A ConnectionConf is created every time in the BufferedMutatorImpl constructor
 ** ConnectionConf reads config values in its constructor
 # 
[https://github.com/apache/hbase/blob/dd9eadb00f9dcd071a246482a11dfc7d63845f00/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java#L326]
 ** An AsyncProcess is created in the BufferedMutatorImpl constructor, so a new 
AsyncProcess is created by each con.getTable().put()
 ** AsyncProcess parses many configuration values

As a result, con.getTable().put() is a CPU-heavy operation because it reads 
configuration values every time.

 

With an in-house patch for this issue, we observed about a 10% improvement in 
max throughput (i.e. CPU usage) at the client side:

!Screenshot from 2019-10-18 13-03-24.png!
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23184) The HeapAllocation in WebUI is not accurate

2019-10-17 Thread Zheng Hu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954242#comment-16954242
 ] 

Zheng Hu commented on HBASE-23184:
--

It seems the design of allocating heap buffers from the static HEAP rather 
than from the ByteBuffAllocator instance is not a good idea, at least for the 
heap-related metrics. FYI [~anoop.hbase].
I think it would be better to allocate heap buffers from the ByteBuffAllocator 
instance (rather than the static HEAP); then, for the heap-allocation metrics, 
we only need to consider that ByteBuffAllocator instance.
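The metrics mismatch being described can be sketched in a few lines; all class and field names below are made up for illustration and do not match HBase's actual ByteBuffAllocator internals:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative names only: heap buffers taken through a shared static
// allocator never touch the per-instance counter that the WebUI reads,
// so the reported HeapAllocation stays 0.
class SketchAllocator {
    static final SketchAllocator HEAP = new SketchAllocator(); // static path
    final AtomicLong heapAllocationBytes = new AtomicLong();

    ByteBuffer allocateHeap(int size) {
        heapAllocationBytes.addAndGet(size); // only this instance's metric moves
        return ByteBuffer.allocate(size);
    }
}

public class HeapMetricsSketch {
    public static void main(String[] args) {
        SketchAllocator perServer = new SketchAllocator(); // metrics source

        // Current behavior: allocation goes through the static HEAP instance.
        SketchAllocator.HEAP.allocateHeap(4096);
        System.out.println(perServer.heapAllocationBytes.get()); // 0

        // Suggested direction: allocate from the instance being reported.
        perServer.allocateHeap(4096);
        System.out.println(perServer.heapAllocationBytes.get()); // 4096
    }
}
```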
 

> The HeapAllocation in WebUI is not accurate
> ---
>
> Key: HBASE-23184
> URL: https://issues.apache.org/jira/browse/HBASE-23184
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Reporter: chenxu
>Priority: Minor
>
> HeapAllocation in WebUI is always 0, the same reason as HBASE-22663



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23172) HBase Canary region success count metrics reflect column family successes, not region successes

2019-10-17 Thread Caroline (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caroline updated HBASE-23172:
-
Attachment: HBASE-23172.branch-1.000.patch
HBASE-23172.branch-2.000.patch
HBASE-23172.master.000.patch
Status: Patch Available  (was: Open)

> HBase Canary region success count metrics reflect column family successes, 
> not region successes
> ---
>
> Key: HBASE-23172
> URL: https://issues.apache.org/jira/browse/HBASE-23172
> Project: HBase
>  Issue Type: Improvement
>  Components: canary
>Affects Versions: 2.2.1, 2.1.5, 2.0.0, 1.5.0, 1.4.0, 1.3.0, 3.0.0
>Reporter: Caroline
>Assignee: Caroline
>Priority: Minor
> Attachments: HBASE-23172.branch-1.000.patch, 
> HBASE-23172.branch-2.000.patch, HBASE-23172.master.000.patch
>
>
> HBase Canary reads once per column family per region. The current "region 
> success count" should actually be "column family success count," which means 
> we need another metric that actually reflects region success count. 
> Additionally, the region read and write latencies only store the latencies of 
> the last column family of the region read. Instead of a map of regions to a 
> single latency value and success value, we should map each region to a list 
> of such values.
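The accounting change proposed above can be sketched briefly; the region names and latencies are hypothetical and the data structures are illustrative, not the Canary's actual ones:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: the canary issues one read per column family, so a region counts
// as a success only if every family read succeeds, and all per-family
// latencies are kept instead of just the last one.
public class CanaryRegionMetricsSketch {
    public static void main(String[] args) {
        // region, latencyMs, success -- one entry per column-family read
        Object[][] reads = {
            {"region-a", 12L, true},
            {"region-a", 30L, true},   // second column family
            {"region-b", 9L, true},
            {"region-b", 0L, false},   // one family failed
        };

        Map<String, List<Long>> latenciesByRegion = new HashMap<>();
        Map<String, Boolean> regionSuccess = new HashMap<>();
        for (Object[] r : reads) {
            String region = (String) r[0];
            latenciesByRegion.computeIfAbsent(region, k -> new ArrayList<>())
                             .add((Long) r[1]);
            // region success = AND over its column-family successes
            regionSuccess.merge(region, (Boolean) r[2], Boolean::logicalAnd);
        }

        System.out.println(latenciesByRegion.get("region-a")); // [12, 30]
        System.out.println(regionSuccess.get("region-b"));     // false
    }
}
```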



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] chenxu14 opened a new pull request #730: HBASE-23184 The HeapAllocation in WebUI is not accurate

2019-10-17 Thread GitBox
chenxu14 opened a new pull request #730: HBASE-23184 The HeapAllocation in 
WebUI is not accurate
URL: https://github.com/apache/hbase/pull/730
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23184) The HeapAllocation in WebUI is not accurate

2019-10-17 Thread chenxu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954222#comment-16954222
 ] 

chenxu commented on HBASE-23184:


FYI [~openinx]

> The HeapAllocation in WebUI is not accurate
> ---
>
> Key: HBASE-23184
> URL: https://issues.apache.org/jira/browse/HBASE-23184
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Reporter: chenxu
>Priority: Minor
>
> HeapAllocation in WebUI is always 0, the same reason as HBASE-22663



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HBASE-23184) The HeapAllocation in WebUI is not accurate

2019-10-17 Thread chenxu (Jira)
chenxu created HBASE-23184:
--

 Summary: The HeapAllocation in WebUI is not accurate
 Key: HBASE-23184
 URL: https://issues.apache.org/jira/browse/HBASE-23184
 Project: HBase
  Issue Type: Bug
  Components: UI
Reporter: chenxu


HeapAllocation in WebUI is always 0, the same reason as HBASE-22663



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-23170) Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME

2019-10-17 Thread Yi Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Mei resolved HBASE-23170.

Fix Version/s: 2.2.3
   2.3.0
   3.0.0
   Resolution: Fixed

> Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
> -
>
> Key: HBASE-23170
> URL: https://issues.apache.org/jira/browse/HBASE-23170
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> Admin#getRegionServers returns the server names.
> ClusterMetrics.Option.LIVE_SERVERS returns a map of server names to metrics, 
> but the metrics are not needed by the Admin#getRegionServers method.
> Please see [HBASE-21938|https://issues.apache.org/jira/browse/HBASE-21938] 
> for more details.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23170) Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME

2019-10-17 Thread Yi Mei (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954200#comment-16954200
 ] 

Yi Mei commented on HBASE-23170:


Pushed to master, branch-2, branch-2.2. Thanks to [~anoop.hbase] [~zhangduo] 
[~zghao] for reviewing.

> Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
> -
>
> Key: HBASE-23170
> URL: https://issues.apache.org/jira/browse/HBASE-23170
> Project: HBase
>  Issue Type: Improvement
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
>
> Admin#getRegionServers returns the server names.
> ClusterMetrics.Option.LIVE_SERVERS returns a map of server names to metrics, 
> but the metrics are not needed by the Admin#getRegionServers method.
> Please see [HBASE-21938|https://issues.apache.org/jira/browse/HBASE-21938] 
> for more details.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23148) Please upload the Hbase connector jar to the Maven central repository

2019-10-17 Thread Zhaoyang Qin (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954196#comment-16954196
 ] 

Zhaoyang Qin commented on HBASE-23148:
--



{code:xml}
<dependency>
  <groupId>org.apache.hbase.connectors.spark</groupId>
  <artifactId>hbase-spark</artifactId>
  <version>1.0.0</version>
</dependency>
{code}

I've tested it in a demo project, and it works.

> Please upload the Hbase connector jar  to the Maven central repository
> --
>
> Key: HBASE-23148
> URL: https://issues.apache.org/jira/browse/HBASE-23148
> Project: HBase
>  Issue Type: Wish
>  Components: hbase-connectors
>Affects Versions: connector-1.0.0
> Environment: Spark version: 2.3.x 2.4.x
> Scala version: 2.11.8
> hbase-spark version: 1.0.0
>Reporter: Zhaoyang Qin
>Priority: Minor
> Attachments: hbase-spark.png
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Could HBase officially publish the hbase-spark jar to the Maven Central 
> repository? Currently developers must compile and install it themselves 
> before using it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] mymeiyi merged pull request #721: HBASE-23170 Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME

2019-10-17 Thread GitBox
mymeiyi merged pull request #721: HBASE-23170 Admin#getRegionServers use 
ClusterMetrics.Option.SERVERS_NAME
URL: https://github.com/apache/hbase/pull/721
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23177) If fail to open reference because FNFE, make it plain it is a Reference

2019-10-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954174#comment-16954174
 ] 

Hudson commented on HBASE-23177:


Results for branch branch-2.2
[build #665 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/665/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/665//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/665//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/665//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> If fail to open reference because FNFE, make it plain it is a Reference
> ---
>
> Key: HBASE-23177
> URL: https://issues.apache.org/jira/browse/HBASE-23177
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
> Attachments: 
> 0001-HBASE-23177-If-fail-to-open-reference-because-FNFE-m.patch, 
> HBASE-23177.branch-1.001.patch
>
>
> If the root file for a Reference is missing, it takes a while to figure out. 
> The Master side reports a failed open of the Region; the RegionServer side 
> reports an FNFE for some seemingly random file. Better to dump out the 
> Reference data too; it helps in figuring out what has gone wrong. Otherwise 
> it is confusingly hard to tie the FNFE to the root cause.
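The operability fix described above amounts to attaching the Reference's contents to the exception. The method name and message format below are hypothetical, not HBase's actual code:

```java
import java.io.FileNotFoundException;
import java.io.IOException;

// Hypothetical sketch: wrap the bare FNFE with the Reference's contents so
// the operator can tie the missing file back to the Reference.
public class ReferenceFnfeSketch {
    static void openReferencedFile(String referencedPath, String referenceDump)
            throws IOException {
        try {
            // simulate the missing root file behind the Reference
            throw new FileNotFoundException(referencedPath);
        } catch (FileNotFoundException fnfe) {
            throw new IOException("Failed to open Reference; referenced file "
                + referencedPath + " is missing. Reference: " + referenceDump, fnfe);
        }
    }

    public static void main(String[] args) {
        try {
            openReferencedFile("/hbase/data/default/t/region/cf/file",
                "Reference{splitkey=row-500, top=true}");
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```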



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23107) Avoid temp byte array creation when doing cacheDataOnWrite

2019-10-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954172#comment-16954172
 ] 

Hudson commented on HBASE-23107:


Results for branch branch-2
[build #2326 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2326/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2326//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2326//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2326//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Avoid temp byte array creation when doing cacheDataOnWrite
> --
>
> Key: HBASE-23107
> URL: https://issues.apache.org/jira/browse/HBASE-23107
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache, HFile
>Reporter: chenxu
>Assignee: chenxu
>Priority: Major
>  Labels: gc
> Fix For: 3.0.0, 2.3.0
>
> Attachments: flamegraph_after.svg, flamegraph_before.svg
>
>
> code in HFileBlock.Writer.cloneUncompressedBufferWithHeader
> {code:java}
> ByteBuffer cloneUncompressedBufferWithHeader() {
>   expectState(State.BLOCK_READY);
>   byte[] uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   …
> }
> {code}
> When the cacheOnWrite feature is enabled, a temp byte array is created in 
> order to copy the block’s data; we can avoid this by using ByteBuffAllocator. 
> This can improve GC performance in write-heavy scenarios.
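The temp-array cost is visible even in plain Java: ByteArrayOutputStream.toByteArray() allocates and copies a fresh array on every call, whereas exposing a view over the existing internal array avoids the copy. A sketch of the idea under that assumption (ExposedBaos is illustrative; the real patch uses ByteBuffAllocator):

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

// ExposedBaos is illustrative, not HBase code: it exposes a read-only view
// over the stream's internal array via the protected buf/count fields.
class ExposedBaos extends ByteArrayOutputStream {
    ByteBuffer asReadOnlyBuffer() {
        // wraps the existing array: no allocation, no copy
        return ByteBuffer.wrap(buf, 0, count).asReadOnlyBuffer();
    }
}

public class AvoidTempCopySketch {
    public static void main(String[] args) {
        ExposedBaos baos = new ExposedBaos();
        for (int i = 0; i < 256; i++) {
            baos.write(i);
        }

        byte[] copy = baos.toByteArray();          // temp 256-byte array
        ByteBuffer view = baos.asReadOnlyBuffer(); // zero-copy view

        System.out.println(copy.length + " " + view.remaining()); // 256 256
    }
}
```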



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-22749) Distributed MOB compactions

2019-10-17 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954167#comment-16954167
 ] 

HBase QA commented on HBASE-22749:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 3s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  4m 
38s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
36s{color} | {color:green} master passed {color} |
| {color:orange}-0{color} | {color:orange} patch {color} | {color:orange}  4m 
44s{color} | {color:orange} Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
33s{color} | {color:red} hbase-server: The patch generated 17 new + 308 
unchanged - 47 fixed = 325 total (was 355) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 12 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 3s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m 27s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
36s{color} | {color:red} hbase-server generated 3 new + 0 unchanged - 0 fixed = 
3 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}237m  6s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}300m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  org.apache.hadoop.hbase.mob.FileSelection defines 
compareTo(FileSelection) and uses Object.equals()  At 
DefaultMobStoreCompactor.java:Object.equals()  At 
DefaultMobStoreCompactor.java:[lines 615-620] |
|  |  org.apache.hadoop.hbase.mob.Generation defines compareTo(Generation) and 
uses Object.equals()  At DefaultMobStoreCompactor.java:Object.equals()  At 
DefaultMobStoreCompactor.java:[lines 793-798] |
|  |  

[GitHub] [hbase] Apache-HBase commented on issue #623: HBASE-22749: Distributed MOB compactions

2019-10-17 Thread GitBox
Apache-HBase commented on issue #623: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#issuecomment-543415584
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   1m 14s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 10 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m  3s |  master passed  |
   | :green_heart: |  compile  |   0m 57s |  master passed  |
   | :green_heart: |  checkstyle  |   1m 36s |  master passed  |
   | :green_heart: |  shadedjars  |   5m  3s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 37s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m 38s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   4m 36s |  master passed  |
   | :yellow_heart: |  patch  |   4m 44s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m 29s |  the patch passed  |
   | :green_heart: |  compile  |   0m 57s |  the patch passed  |
   | :green_heart: |  javac  |   0m 57s |  the patch passed  |
   | :broken_heart: |  checkstyle  |   1m 33s |  hbase-server: The patch 
generated 17 new + 308 unchanged - 47 fixed = 325 total (was 355)  |
   | :broken_heart: |  whitespace  |   0m  0s |  The patch has 12 line(s) that 
end in whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | :green_heart: |  shadedjars  |   5m  3s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  17m 27s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 35s |  the patch passed  |
   | :broken_heart: |  findbugs  |   4m 36s |  hbase-server generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0)  |
   ||| _ Other Tests _ |
   | :broken_heart: |  unit  | 237m  6s |  hbase-server in the patch failed.  |
   | :green_heart: |  asflicense  |   0m 29s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 300m 25s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-server |
   |  |  org.apache.hadoop.hbase.mob.FileSelection defines 
compareTo(FileSelection) and uses Object.equals()  At 
DefaultMobStoreCompactor.java:Object.equals()  At 
DefaultMobStoreCompactor.java:[lines 615-620] |
   |  |  org.apache.hadoop.hbase.mob.Generation defines compareTo(Generation) 
and uses Object.equals()  At DefaultMobStoreCompactor.java:Object.equals()  At 
DefaultMobStoreCompactor.java:[lines 793-798] |
   |  |  Unused field:DefaultMobStoreCompactor.java |
   | Failed junit tests | hadoop.hbase.client.TestAsyncRegionAdminApi |
   |   | hadoop.hbase.mob.TestMobCompaction |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-623/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/623 |
   | JIRA Issue | HBASE-22749 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux cf8429827848 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-623/out/precommit/personality/provided.sh
 |
   | git revision | master / 0f910f0c32 |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-623/4/artifact/out/diff-checkstyle-hbase-server.txt
 |
   | whitespace | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-623/4/artifact/out/whitespace-eol.txt
 |
   | findbugs | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-623/4/artifact/out/new-findbugs-hbase-server.html
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-623/4/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-623/4/testReport/
 |
   | Max. process+thread count | 5193 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 

[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-17 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336273716
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobUtils.java
 ##
 @@ -907,6 +789,143 @@ public static boolean hasMobColumns(TableDescriptor htd) 
{
 return false;
   }
 
+  /**
+   * Get list of Mob column families (if any exist)
+   * @param htd table descriptor
+   * @return list of Mob column families
+   */
+  public static List<ColumnFamilyDescriptor> getMobColumnFamilies(TableDescriptor htd) {
+
+    List<ColumnFamilyDescriptor> fams = new ArrayList<>();
+ColumnFamilyDescriptor[] hcds = htd.getColumnFamilies();
+for (ColumnFamilyDescriptor hcd : hcds) {
+  if (hcd.isMobEnabled()) {
+fams.add(hcd);
+  }
+}
+return fams;
+  }
+
+  /**
+   * Performs housekeeping file cleaning (called by MOB Cleaner chore)
+   * @param conf configuration
+   * @param table table name
+   * @throws IOException
+   */
+  public static void cleanupObsoleteMobFiles(Configuration conf, TableName 
table)
+  throws IOException {
+
+try (final Connection conn = ConnectionFactory.createConnection(conf);
+final Admin admin = conn.getAdmin();) {
+  TableDescriptor htd = admin.getDescriptor(table);
+  List<ColumnFamilyDescriptor> list = getMobColumnFamilies(htd);
+  if (list.size() == 0) {
+LOG.info("Skipping non-MOB table [" + table + "]");
+return;
+  }
+  Path rootDir = FSUtils.getRootDir(conf);
+  Path tableDir = FSUtils.getTableDir(rootDir, table);
+  // How safe is this call?
+  List<Path> regionDirs = FSUtils.getRegionDirs(FileSystem.get(conf), tableDir);
+
+  Set<String> allActiveMobFileName = new HashSet<>();
+  FileSystem fs = FileSystem.get(conf);
+  for (Path regionPath: regionDirs) {
+for (ColumnFamilyDescriptor hcd: list) {
+  String family = hcd.getNameAsString();
+  Path storePath = new Path(regionPath, family);
+  boolean succeed = false;
+  Set<String> regionMobs = new HashSet<>();
+  while(!succeed) {
+//TODO handle FNFE
+RemoteIterator<LocatedFileStatus> rit = fs.listLocatedStatus(storePath);
+List<Path> storeFiles = new ArrayList<>();
+// Load list of store files first
+while(rit.hasNext()) {
+  Path p = rit.next().getPath();
+  if (fs.isFile(p)) {
+storeFiles.add(p);
+  }
+}
+try {
+  for(Path pp: storeFiles) {
+HStoreFile sf = new HStoreFile(fs, pp, conf, 
CacheConfig.DISABLED,
+  BloomType.NONE, true);
+sf.initReader();
+byte[] mobRefData = 
sf.getMetadataValue(HStoreFile.MOB_FILE_REFS);
+byte[] mobCellCountData = 
sf.getMetadataValue(HStoreFile.MOB_CELLS_COUNT);
+byte[] bulkloadMarkerData = 
sf.getMetadataValue(HStoreFile.BULKLOAD_TASK_KEY);
+if (mobRefData == null && (mobCellCountData != null ||
+bulkloadMarkerData == null)) {
 
 Review comment:
  It turned out that MOB_CELLS_COUNT is irrelevant here: it is a meta attribute of MOB files 
only, and here we iterate over regular store files. I changed the logic around this, and 
how we store MOB_FILE_REFS during flushes and compactions. The old code omitted this 
attribute when the number of references was 0. The new code always stores this meta key, 
even when the number of references is 0, giving us a way to tell that a store file was 
created by the new code. 
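  The distinction described above can be sketched in isolation (helper names `encodeRefs` and `isPreUpgradeFile` are illustrative, not HBase API): because the new code always writes the MOB refs meta key, even for an empty set, a missing key reliably marks a store file written by the old code.

```java
import java.util.Set;
import java.util.TreeSet;

public class MobRefsMetaSketch {
  // Writer side (flush/compaction): always serialize the refs set, even when empty.
  static byte[] encodeRefs(Set<String> refs) {
    return String.join(",", refs).getBytes();
  }

  // Cleaner side: only files written before the change lack the meta key entirely.
  static boolean isPreUpgradeFile(byte[] mobRefData) {
    return mobRefData == null;
  }

  public static void main(String[] args) {
    byte[] newFileZeroRefs = encodeRefs(new TreeSet<String>()); // new code, zero refs
    System.out.println(isPreUpgradeFile(newFileZeroRefs)); // false: key present, just empty
    System.out.println(isPreUpgradeFile(null));            // true: key absent -> old file
  }
}
```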


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-17 Thread GitBox
busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB 
compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336258209
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobUtils.java
 ##
 @@ -907,6 +789,143 @@ public static boolean hasMobColumns(TableDescriptor htd) {
     return false;
   }
 
+  /**
+   * Get list of Mob column families (if any exists)
+   * @param htd table descriptor
+   * @return list of Mob column families
+   */
+  public static List<ColumnFamilyDescriptor> getMobColumnFamilies(TableDescriptor htd) {
+
+    List<ColumnFamilyDescriptor> fams = new ArrayList<ColumnFamilyDescriptor>();
+    ColumnFamilyDescriptor[] hcds = htd.getColumnFamilies();
+    for (ColumnFamilyDescriptor hcd : hcds) {
+      if (hcd.isMobEnabled()) {
+        fams.add(hcd);
+      }
+    }
+    return fams;
+  }
+
+  /**
+   * Performs housekeeping file cleaning (called by MOB Cleaner chore)
+   * @param conf configuration
+   * @param table table name
+   * @throws IOException
+   */
+  public static void cleanupObsoleteMobFiles(Configuration conf, TableName table)
+      throws IOException {
+
+    try (final Connection conn = ConnectionFactory.createConnection(conf);
+        final Admin admin = conn.getAdmin()) {
+      TableDescriptor htd = admin.getDescriptor(table);
+      List<ColumnFamilyDescriptor> list = getMobColumnFamilies(htd);
+      if (list.size() == 0) {
+        LOG.info("Skipping non-MOB table [" + table + "]");
+        return;
+      }
+      Path rootDir = FSUtils.getRootDir(conf);
+      Path tableDir = FSUtils.getTableDir(rootDir, table);
+      // How safe is this call?
+      List<Path> regionDirs = FSUtils.getRegionDirs(FileSystem.get(conf), tableDir);
+
+      Set<String> allActiveMobFileName = new HashSet<String>();
 
 Review comment:
  Sure. The important thing is that, if the list might push us toward OOM, we manage 
to log a warning that points an operator in the right direction. 




[GitHub] [hbase] busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-17 Thread GitBox
busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB 
compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336255785
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java
 ##
 @@ -183,105 +270,166 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
     boolean hasMore;
     Path path = MobUtils.getMobFamilyPath(conf, store.getTableName(), store.getColumnFamilyName());
     byte[] fileName = null;
-    StoreFileWriter mobFileWriter = null, delFileWriter = null;
-    long mobCells = 0, deleteMarkersCount = 0;
+    StoreFileWriter mobFileWriter = null;
+    long mobCells = 0;
     long cellsCountCompactedToMob = 0, cellsCountCompactedFromMob = 0;
     long cellsSizeCompactedToMob = 0, cellsSizeCompactedFromMob = 0;
     boolean finished = false;
+
     ScannerContext scannerContext =
         ScannerContext.newBuilder().setBatchLimit(compactionKVMax).build();
     throughputController.start(compactionName);
-    KeyValueScanner kvs = (scanner instanceof KeyValueScanner)? (KeyValueScanner)scanner : null;
-    long shippedCallSizeLimit = (long) numofFilesToCompact * this.store.getColumnFamilyDescriptor().getBlocksize();
+    KeyValueScanner kvs = (scanner instanceof KeyValueScanner) ? (KeyValueScanner) scanner : null;
+    long shippedCallSizeLimit =
+        (long) numofFilesToCompact * this.store.getColumnFamilyDescriptor().getBlocksize();
+
+    MobCell mobCell = null;
     try {
       try {
         // If the mob file writer could not be created, directly write the cell to the store file.
         mobFileWriter = mobStore.createWriterInTmp(new Date(fd.latestPutTs), fd.maxKeyCount,
           compactionCompression, store.getRegionInfo().getStartKey(), true);
         fileName = Bytes.toBytes(mobFileWriter.getPath().getName());
       } catch (IOException e) {
-        LOG.warn("Failed to create mob writer, "
-            + "we will continue the compaction by writing MOB cells directly in store files", e);
+        // Bailing out
 
 Review comment:
  That's true. For us to make progress in the old approach we'd need some kind of odd 
configuration, like different quotas or perms on the mobdir compared to the active dir. 
I can see how drawing attention while staying in a holding pattern has advantages over 
bouncing a bunch of data around.
  
  We should make sure to release note this behavior change.
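  A minimal sketch of the new bail-out behavior (the `WriterFactory` interface and method names are made up for illustration): instead of silently degrading by writing MOB cells inline into store files, the failure is propagated so the compaction stays in a holding pattern that draws operator attention.

```java
import java.io.IOException;

public class MobWriterBailoutSketch {
  interface WriterFactory {
    AutoCloseable create() throws IOException;
  }

  // New behavior: rethrow instead of falling back to inlining MOB cells.
  static AutoCloseable createMobWriterOrBail(WriterFactory factory) throws IOException {
    try {
      return factory.create();
    } catch (IOException e) {
      throw new IOException("Failed to create mob writer, bailing out of compaction", e);
    }
  }

  public static void main(String[] args) {
    try {
      // Simulate the "odd configuration" case, e.g. a quota only on the mobdir.
      createMobWriterOrBail(() -> { throw new IOException("quota exceeded on mobdir"); });
      System.out.println("created");
    } catch (IOException e) {
      System.out.println("bailed"); // compaction aborted; will be retried later
    }
  }
}
```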
   
   




[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-17 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336216181
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobUtils.java
 ##
 @@ -907,6 +789,143 @@ public static boolean hasMobColumns(TableDescriptor htd) {
     return false;
   }
 
+  /**
+   * Get list of Mob column families (if any exists)
+   * @param htd table descriptor
+   * @return list of Mob column families
+   */
+  public static List<ColumnFamilyDescriptor> getMobColumnFamilies(TableDescriptor htd) {
+
+    List<ColumnFamilyDescriptor> fams = new ArrayList<ColumnFamilyDescriptor>();
+    ColumnFamilyDescriptor[] hcds = htd.getColumnFamilies();
+    for (ColumnFamilyDescriptor hcd : hcds) {
+      if (hcd.isMobEnabled()) {
+        fams.add(hcd);
+      }
+    }
+    return fams;
+  }
+
+  /**
+   * Performs housekeeping file cleaning (called by MOB Cleaner chore)
+   * @param conf configuration
+   * @param table table name
+   * @throws IOException
+   */
+  public static void cleanupObsoleteMobFiles(Configuration conf, TableName table)
+      throws IOException {
+
+    try (final Connection conn = ConnectionFactory.createConnection(conf);
+        final Admin admin = conn.getAdmin()) {
+      TableDescriptor htd = admin.getDescriptor(table);
+      List<ColumnFamilyDescriptor> list = getMobColumnFamilies(htd);
+      if (list.size() == 0) {
+        LOG.info("Skipping non-MOB table [" + table + "]");
+        return;
+      }
+      Path rootDir = FSUtils.getRootDir(conf);
+      Path tableDir = FSUtils.getTableDir(rootDir, table);
+      // How safe is this call?
+      List<Path> regionDirs = FSUtils.getRegionDirs(FileSystem.get(conf), tableDir);
+
+      Set<String> allActiveMobFileName = new HashSet<String>();
+      FileSystem fs = FileSystem.get(conf);
+      for (Path regionPath : regionDirs) {
+        for (ColumnFamilyDescriptor hcd : list) {
+          String family = hcd.getNameAsString();
+          Path storePath = new Path(regionPath, family);
+          boolean succeed = false;
+          Set<String> regionMobs = new HashSet<String>();
+          while (!succeed) {
+            // TODO handle FNFE
+            RemoteIterator<LocatedFileStatus> rit = fs.listLocatedStatus(storePath);
+            List<Path> storeFiles = new ArrayList<Path>();
+            // Load list of store files first
+            while (rit.hasNext()) {
+              Path p = rit.next().getPath();
+              if (fs.isFile(p)) {
+                storeFiles.add(p);
+              }
+            }
+            try {
+              for (Path pp : storeFiles) {
+                HStoreFile sf = new HStoreFile(fs, pp, conf, CacheConfig.DISABLED,
+                    BloomType.NONE, true);
+                sf.initReader();
+                byte[] mobRefData = sf.getMetadataValue(HStoreFile.MOB_FILE_REFS);
+                byte[] mobCellCountData = sf.getMetadataValue(HStoreFile.MOB_CELLS_COUNT);
+                byte[] bulkloadMarkerData = sf.getMetadataValue(HStoreFile.BULKLOAD_TASK_KEY);
+                if (mobRefData == null && (mobCellCountData != null ||
+                    bulkloadMarkerData == null)) {
+                  LOG.info("Found old store file with no MOB_FILE_REFS: " + pp
+                      + " - can not proceed until all old files will be MOB-compacted");
+                  return;
+                } else if (mobRefData == null) {
+                  LOG.info("Skipping file without MOB references (can be bulkloaded file):" + pp);
+                  continue;
+                }
+                String[] mobs = new String(mobRefData).split(",");
+                regionMobs.addAll(Arrays.asList(mobs));
+              }
+            } catch (FileNotFoundException e) {
 
 Review comment:
  Yes, we still need to analyze whether this is a potential source of data loss. 
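  The surrounding `while (!succeed)` loop retries the whole listing when a store file vanishes mid-scan (for example, archived by a concurrent compaction). Stripped of file-system details, the retry shape looks roughly like this (a real version should probably cap the number of attempts):

```java
import java.io.FileNotFoundException;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import java.util.concurrent.Callable;

public class RelistOnFnfeSketch {
  // Re-read from scratch until no file disappears underneath us.
  static Set<String> readRefsWithRetry(Callable<List<String>> readAll) throws Exception {
    while (true) {
      try {
        return new TreeSet<>(readAll.call());
      } catch (FileNotFoundException e) {
        // A file was moved/archived mid-scan; re-list and start over.
      }
    }
  }

  public static void main(String[] args) throws Exception {
    int[] attempt = {0};
    Set<String> refs = readRefsWithRetry(() -> {
      if (attempt[0]++ == 0) {
        throw new FileNotFoundException("store file moved during scan");
      }
      return List.of("mob-a", "mob-b");
    });
    System.out.println(refs); // [mob-a, mob-b]
  }
}
```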




[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-17 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336214069
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobUtils.java
 ##
 @@ -907,6 +789,143 @@ public static boolean hasMobColumns(TableDescriptor htd) {
     return false;
   }
 
+  /**
+   * Get list of Mob column families (if any exists)
+   * @param htd table descriptor
+   * @return list of Mob column families
+   */
+  public static List<ColumnFamilyDescriptor> getMobColumnFamilies(TableDescriptor htd) {
+
+    List<ColumnFamilyDescriptor> fams = new ArrayList<ColumnFamilyDescriptor>();
+    ColumnFamilyDescriptor[] hcds = htd.getColumnFamilies();
+    for (ColumnFamilyDescriptor hcd : hcds) {
+      if (hcd.isMobEnabled()) {
+        fams.add(hcd);
+      }
+    }
+    return fams;
+  }
+
+  /**
+   * Performs housekeeping file cleaning (called by MOB Cleaner chore)
+   * @param conf configuration
+   * @param table table name
+   * @throws IOException
+   */
+  public static void cleanupObsoleteMobFiles(Configuration conf, TableName table)
+      throws IOException {
+
+    try (final Connection conn = ConnectionFactory.createConnection(conf);
+        final Admin admin = conn.getAdmin()) {
+      TableDescriptor htd = admin.getDescriptor(table);
+      List<ColumnFamilyDescriptor> list = getMobColumnFamilies(htd);
+      if (list.size() == 0) {
+        LOG.info("Skipping non-MOB table [" + table + "]");
+        return;
+      }
+      Path rootDir = FSUtils.getRootDir(conf);
+      Path tableDir = FSUtils.getTableDir(rootDir, table);
+      // How safe is this call?
+      List<Path> regionDirs = FSUtils.getRegionDirs(FileSystem.get(conf), tableDir);
+
+      Set<String> allActiveMobFileName = new HashSet<String>();
+      FileSystem fs = FileSystem.get(conf);
+      for (Path regionPath : regionDirs) {
+        for (ColumnFamilyDescriptor hcd : list) {
+          String family = hcd.getNameAsString();
+          Path storePath = new Path(regionPath, family);
+          boolean succeed = false;
+          Set<String> regionMobs = new HashSet<String>();
+          while (!succeed) {
+            // TODO handle FNFE
+            RemoteIterator<LocatedFileStatus> rit = fs.listLocatedStatus(storePath);
+            List<Path> storeFiles = new ArrayList<Path>();
+            // Load list of store files first
+            while (rit.hasNext()) {
+              Path p = rit.next().getPath();
+              if (fs.isFile(p)) {
+                storeFiles.add(p);
+              }
+            }
+            try {
+              for (Path pp : storeFiles) {
+                HStoreFile sf = new HStoreFile(fs, pp, conf, CacheConfig.DISABLED,
+                    BloomType.NONE, true);
+                sf.initReader();
+                byte[] mobRefData = sf.getMetadataValue(HStoreFile.MOB_FILE_REFS);
+                byte[] mobCellCountData = sf.getMetadataValue(HStoreFile.MOB_CELLS_COUNT);
+                byte[] bulkloadMarkerData = sf.getMetadataValue(HStoreFile.BULKLOAD_TASK_KEY);
+                if (mobRefData == null && (mobCellCountData != null ||
+                    bulkloadMarkerData == null)) {
+                  LOG.info("Found old store file with no MOB_FILE_REFS: " + pp
+                      + " - can not proceed until all old files will be MOB-compacted");
+                  return;
+                } else if (mobRefData == null) {
+                  LOG.info("Skipping file without MOB references (can be bulkloaded file):" + pp);
+                  continue;
+                }
+                String[] mobs = new String(mobRefData).split(",");
+                regionMobs.addAll(Arrays.asList(mobs));
+              }
+            } catch (FileNotFoundException e) {
+              // TODO
+              LOG.warn(e.getMessage());
+              continue;
+            }
+            succeed = true;
+          }
+          // Add MOB refs for current region/family
+          allActiveMobFileName.addAll(regionMobs);
+        } // END column families
+      } // END regions
+
+      // Now scan MOB directories and find MOB files with no references to them
+      long now = System.currentTimeMillis();
+      long minAgeToArchive = conf.getLong(MobConstants.MOB_MINIMUM_FILE_AGE_TO_ARCHIVE_KEY,
+          MobConstants.DEFAULT_MOB_MINIMUM_FILE_AGE_TO_ARCHIVE);
+      for (ColumnFamilyDescriptor hcd : list) {
+        List<Path> toArchive = new ArrayList<Path>();
+        String family = hcd.getNameAsString();
+        Path dir = getMobFamilyPath(conf, table, family);
+        RemoteIterator<LocatedFileStatus> rit = fs.listLocatedStatus(dir);
+        while (rit.hasNext()) {
+          LocatedFileStatus lfs = rit.next();
+          Path p = lfs.getPath();
+          if (!allActiveMobFileName.contains(p.getName())) {
 
 Review comment:
  We do not have _del files anymore. All deletes are handled the usual way during normal 
major compactions of regular store files. This code just checks whether a MOB file is in 
the active set; if not, it is eligible for archiving. 
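  The membership check described here, combined with the minAgeToArchive guard from the quoted code, can be condensed into one predicate (the method name `isEligibleForArchiving` is illustrative, not HBase API):

```java
import java.util.Set;
import java.util.TreeSet;

public class MobArchiveEligibilitySketch {
  // A MOB file may be archived only if nothing references it AND it is old
  // enough that an in-flight flush/compaction cannot still be adding refs to it.
  static boolean isEligibleForArchiving(String fileName, long modificationTime,
      Set<String> activeMobFiles, long now, long minAgeToArchive) {
    return !activeMobFiles.contains(fileName)
        && (now - modificationTime) > minAgeToArchive;
  }

  public static void main(String[] args) {
    Set<String> active = new TreeSet<>();
    active.add("mobfile-1");
    long now = 10_000_000L;
    long minAge = 3_600_000L; // illustrative one-hour safety window
    System.out.println(isEligibleForArchiving("mobfile-1", 0L, active, now, minAge));        // false: still referenced
    System.out.println(isEligibleForArchiving("mobfile-2", 0L, active, now, minAge));        // true: unreferenced and old
    System.out.println(isEligibleForArchiving("mobfile-3", now - 10L, active, now, minAge)); // false: too fresh
  }
}
```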

[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-17 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336214477
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobUtils.java
 ##
 @@ -907,6 +789,143 @@ public static boolean hasMobColumns(TableDescriptor htd) {
     return false;
   }
 
+  /**
+   * Get list of Mob column families (if any exists)
+   * @param htd table descriptor
+   * @return list of Mob column families
+   */
+  public static List<ColumnFamilyDescriptor> getMobColumnFamilies(TableDescriptor htd) {
+
+    List<ColumnFamilyDescriptor> fams = new ArrayList<ColumnFamilyDescriptor>();
+    ColumnFamilyDescriptor[] hcds = htd.getColumnFamilies();
+    for (ColumnFamilyDescriptor hcd : hcds) {
+      if (hcd.isMobEnabled()) {
+        fams.add(hcd);
+      }
+    }
+    return fams;
+  }
+
+  /**
+   * Performs housekeeping file cleaning (called by MOB Cleaner chore)
+   * @param conf configuration
+   * @param table table name
+   * @throws IOException
+   */
+  public static void cleanupObsoleteMobFiles(Configuration conf, TableName table)
+      throws IOException {
+
+    try (final Connection conn = ConnectionFactory.createConnection(conf);
+        final Admin admin = conn.getAdmin()) {
+      TableDescriptor htd = admin.getDescriptor(table);
+      List<ColumnFamilyDescriptor> list = getMobColumnFamilies(htd);
+      if (list.size() == 0) {
+        LOG.info("Skipping non-MOB table [" + table + "]");
+        return;
+      }
+      Path rootDir = FSUtils.getRootDir(conf);
+      Path tableDir = FSUtils.getTableDir(rootDir, table);
+      // How safe is this call?
+      List<Path> regionDirs = FSUtils.getRegionDirs(FileSystem.get(conf), tableDir);
+
+      Set<String> allActiveMobFileName = new HashSet<String>();
+      FileSystem fs = FileSystem.get(conf);
+      for (Path regionPath : regionDirs) {
+        for (ColumnFamilyDescriptor hcd : list) {
+          String family = hcd.getNameAsString();
+          Path storePath = new Path(regionPath, family);
+          boolean succeed = false;
+          Set<String> regionMobs = new HashSet<String>();
+          while (!succeed) {
+            // TODO handle FNFE
+            RemoteIterator<LocatedFileStatus> rit = fs.listLocatedStatus(storePath);
+            List<Path> storeFiles = new ArrayList<Path>();
+            // Load list of store files first
+            while (rit.hasNext()) {
+              Path p = rit.next().getPath();
+              if (fs.isFile(p)) {
+                storeFiles.add(p);
+              }
+            }
+            try {
+              for (Path pp : storeFiles) {
+                HStoreFile sf = new HStoreFile(fs, pp, conf, CacheConfig.DISABLED,
+                    BloomType.NONE, true);
+                sf.initReader();
+                byte[] mobRefData = sf.getMetadataValue(HStoreFile.MOB_FILE_REFS);
+                byte[] mobCellCountData = sf.getMetadataValue(HStoreFile.MOB_CELLS_COUNT);
+                byte[] bulkloadMarkerData = sf.getMetadataValue(HStoreFile.BULKLOAD_TASK_KEY);
+                if (mobRefData == null && (mobCellCountData != null ||
+                    bulkloadMarkerData == null)) {
+                  LOG.info("Found old store file with no MOB_FILE_REFS: " + pp
+                      + " - can not proceed until all old files will be MOB-compacted");
+                  return;
+                } else if (mobRefData == null) {
+                  LOG.info("Skipping file without MOB references (can be bulkloaded file):" + pp);
+                  continue;
+                }
+                String[] mobs = new String(mobRefData).split(",");
+                regionMobs.addAll(Arrays.asList(mobs));
+              }
+            } catch (FileNotFoundException e) {
+              // TODO
+              LOG.warn(e.getMessage());
+              continue;
+            }
+            succeed = true;
+          }
+          // Add MOB refs for current region/family
+          allActiveMobFileName.addAll(regionMobs);
+        } // END column families
+      } // END regions
+
+      // Now scan MOB directories and find MOB files with no references to them
+      long now = System.currentTimeMillis();
+      long minAgeToArchive = conf.getLong(MobConstants.MOB_MINIMUM_FILE_AGE_TO_ARCHIVE_KEY,
+          MobConstants.DEFAULT_MOB_MINIMUM_FILE_AGE_TO_ARCHIVE);
+      for (ColumnFamilyDescriptor hcd : list) {
+        List<Path> toArchive = new ArrayList<Path>();
+        String family = hcd.getNameAsString();
+        Path dir = getMobFamilyPath(conf, table, family);
+        RemoteIterator<LocatedFileStatus> rit = fs.listLocatedStatus(dir);
+        while (rit.hasNext()) {
+          LocatedFileStatus lfs = rit.next();
+          Path p = lfs.getPath();
+          if (!allActiveMobFileName.contains(p.getName())) {
 
 Review comment:
  This code has been moved to the MobFileCleanerChore class, as per your request. 
It is no longer in MobUtils. 



[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-17 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336206677
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobUtils.java
 ##
 @@ -907,6 +789,143 @@ public static boolean hasMobColumns(TableDescriptor htd) {
     return false;
   }
 
+  /**
+   * Get list of Mob column families (if any exists)
+   * @param htd table descriptor
+   * @return list of Mob column families
+   */
+  public static List<ColumnFamilyDescriptor> getMobColumnFamilies(TableDescriptor htd) {
+
+    List<ColumnFamilyDescriptor> fams = new ArrayList<ColumnFamilyDescriptor>();
+    ColumnFamilyDescriptor[] hcds = htd.getColumnFamilies();
+    for (ColumnFamilyDescriptor hcd : hcds) {
+      if (hcd.isMobEnabled()) {
+        fams.add(hcd);
+      }
+    }
+    return fams;
+  }
+
+  /**
+   * Performs housekeeping file cleaning (called by MOB Cleaner chore)
+   * @param conf configuration
+   * @param table table name
+   * @throws IOException
+   */
+  public static void cleanupObsoleteMobFiles(Configuration conf, TableName table)
+      throws IOException {
+
+    try (final Connection conn = ConnectionFactory.createConnection(conf);
+        final Admin admin = conn.getAdmin()) {
+      TableDescriptor htd = admin.getDescriptor(table);
+      List<ColumnFamilyDescriptor> list = getMobColumnFamilies(htd);
+      if (list.size() == 0) {
+        LOG.info("Skipping non-MOB table [" + table + "]");
+        return;
+      }
+      Path rootDir = FSUtils.getRootDir(conf);
+      Path tableDir = FSUtils.getTableDir(rootDir, table);
+      // How safe is this call?
+      List<Path> regionDirs = FSUtils.getRegionDirs(FileSystem.get(conf), tableDir);
+
+      Set<String> allActiveMobFileName = new HashSet<String>();
+      FileSystem fs = FileSystem.get(conf);
+      for (Path regionPath : regionDirs) {
+        for (ColumnFamilyDescriptor hcd : list) {
+          String family = hcd.getNameAsString();
+          Path storePath = new Path(regionPath, family);
+          boolean succeed = false;
+          Set<String> regionMobs = new HashSet<String>();
+          while (!succeed) {
+            // TODO handle FNFE
+            RemoteIterator<LocatedFileStatus> rit = fs.listLocatedStatus(storePath);
+            List<Path> storeFiles = new ArrayList<Path>();
+            // Load list of store files first
+            while (rit.hasNext()) {
+              Path p = rit.next().getPath();
+              if (fs.isFile(p)) {
+                storeFiles.add(p);
+              }
+            }
+            try {
+              for (Path pp : storeFiles) {
+                HStoreFile sf = new HStoreFile(fs, pp, conf, CacheConfig.DISABLED,
+                    BloomType.NONE, true);
+                sf.initReader();
+                byte[] mobRefData = sf.getMetadataValue(HStoreFile.MOB_FILE_REFS);
+                byte[] mobCellCountData = sf.getMetadataValue(HStoreFile.MOB_CELLS_COUNT);
+                byte[] bulkloadMarkerData = sf.getMetadataValue(HStoreFile.BULKLOAD_TASK_KEY);
+                if (mobRefData == null && (mobCellCountData != null ||
+                    bulkloadMarkerData == null)) {
+                  LOG.info("Found old store file with no MOB_FILE_REFS: " + pp
+                      + " - can not proceed until all old files will be MOB-compacted");
+                  return;
+                } else if (mobRefData == null) {
+                  LOG.info("Skipping file without MOB references (can be bulkloaded file):" + pp);
+                  continue;
+                }
+                String[] mobs = new String(mobRefData).split(",");
+                regionMobs.addAll(Arrays.asList(mobs));
+              }
+            } catch (FileNotFoundException e) {
+              // TODO
+              LOG.warn(e.getMessage());
+              continue;
+            }
+            succeed = true;
+          }
+          // Add MOB refs for current region/family
+          allActiveMobFileName.addAll(regionMobs);
+        } // END column families
+      } // END regions
+
+      // Now scan MOB directories and find MOB files with no references to them
+      long now = System.currentTimeMillis();
+      long minAgeToArchive = conf.getLong(MobConstants.MOB_MINIMUM_FILE_AGE_TO_ARCHIVE_KEY,
+          MobConstants.DEFAULT_MOB_MINIMUM_FILE_AGE_TO_ARCHIVE);
+      for (ColumnFamilyDescriptor hcd : list) {
+        List<Path> toArchive = new ArrayList<Path>();
+        String family = hcd.getNameAsString();
+        Path dir = getMobFamilyPath(conf, table, family);
+        RemoteIterator<LocatedFileStatus> rit = fs.listLocatedStatus(dir);
+        while (rit.hasNext()) {
+          LocatedFileStatus lfs = rit.next();
+          Path p = lfs.getPath();
+          if (!allActiveMobFileName.contains(p.getName())) {
+            // MOB is not in a list of active references, but it can be too
+            // fresh, skip it in this case
+            /*DEBUG*/ LOG.debug(" Age=" + (now - fs.getFileStatus(p).getModificationTime()) +
+                " MOB 

[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-17 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336205382
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java
 ##
 @@ -0,0 +1,260 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.master;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.TableDescriptors;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.backup.HFileArchiver;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.master.locking.LockManager;
+import org.apache.hadoop.hbase.mob.MobConstants;
+import org.apache.hadoop.hbase.mob.ExpiredMobFileCleaner;
+import org.apache.hadoop.hbase.mob.MobUtils;
+import org.apache.hadoop.hbase.procedure2.LockType;
+import org.apache.hadoop.hbase.regionserver.BloomType;
+import org.apache.hadoop.hbase.regionserver.HStoreFile;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.HFileArchiveUtil;
+import org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * The Class MobFileCleanerChore for running cleaner regularly to remove the expired
+ * mob files.
+ */
+@InterfaceAudience.Private
+public class MobFileCleanerChore extends ScheduledChore {
+
+  private static final Logger LOG = LoggerFactory.getLogger(MobFileCleanerChore.class);
+  private final HMaster master;
+  private ExpiredMobFileCleaner cleaner;
+
+  public MobFileCleanerChore(HMaster master) {
+    super(master.getServerName() + "-ExpiredMobFileCleanerChore", master, master.getConfiguration()
+        .getInt(MobConstants.MOB_CLEANER_PERIOD, MobConstants.DEFAULT_MOB_CLEANER_PERIOD), master
+        .getConfiguration().getInt(MobConstants.MOB_CLEANER_PERIOD,
+          MobConstants.DEFAULT_MOB_CLEANER_PERIOD), TimeUnit.SECONDS);
+    this.master = master;
+    cleaner = new ExpiredMobFileCleaner();
+    cleaner.setConf(master.getConfiguration());
+  }
+
+  @VisibleForTesting
+  public MobFileCleanerChore() {
+    this.master = null;
+  }
+
+  @Override
+  @edu.umd.cs.findbugs.annotations.SuppressWarnings(value = "REC_CATCH_EXCEPTION",
+      justification = "Intentional")
+  protected void chore() {
+    try {
+
+      TableDescriptors htds = master.getTableDescriptors();
+      Map<String, TableDescriptor> map = htds.getAll();
+      for (TableDescriptor htd : map.values()) {
+        for (ColumnFamilyDescriptor hcd : htd.getColumnFamilies()) {
+          if (hcd.isMobEnabled() && hcd.getMinVersions() == 0) {
+            // clean only for mob-enabled column.
+            // obtain a read table lock before cleaning, synchronize with MobFileCompactionChore.
+            final LockManager.MasterLock lock = master.getLockManager().createMasterLock(
+                MobUtils.getTableLockName(htd.getTableName()), LockType.SHARED,
+                this.getClass().getSimpleName() + ": Cleaning expired mob files");
+            try {
+              lock.acquire();
+              cleaner.cleanExpiredMobFiles(htd.getTableName().getNameAsString(), hcd);
+            } finally {
+              lock.release();
+            }
+          }
+        }
+        // Now clean obsolete files for a table

[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-17 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336205167
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java
 ##
 @@ -0,0 +1,260 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.master;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.TableDescriptors;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.backup.HFileArchiver;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.master.locking.LockManager;
+import org.apache.hadoop.hbase.mob.MobConstants;
+import org.apache.hadoop.hbase.mob.ExpiredMobFileCleaner;
+import org.apache.hadoop.hbase.mob.MobUtils;
+import org.apache.hadoop.hbase.procedure2.LockType;
+import org.apache.hadoop.hbase.regionserver.BloomType;
+import org.apache.hadoop.hbase.regionserver.HStoreFile;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.HFileArchiveUtil;
+import org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * The MobFileCleanerChore runs the cleaner regularly to remove expired mob files.
+ */
+@InterfaceAudience.Private
+public class MobFileCleanerChore extends ScheduledChore {
+
+  private static final Logger LOG = LoggerFactory.getLogger(MobFileCleanerChore.class);
+  private final HMaster master;
+  private ExpiredMobFileCleaner cleaner;
+
+  public MobFileCleanerChore(HMaster master) {
+    super(master.getServerName() + "-ExpiredMobFileCleanerChore", master,
+      master.getConfiguration().getInt(MobConstants.MOB_CLEANER_PERIOD,
+        MobConstants.DEFAULT_MOB_CLEANER_PERIOD),
+      master.getConfiguration().getInt(MobConstants.MOB_CLEANER_PERIOD,
+        MobConstants.DEFAULT_MOB_CLEANER_PERIOD), TimeUnit.SECONDS);
+    this.master = master;
+    cleaner = new ExpiredMobFileCleaner();
+    cleaner.setConf(master.getConfiguration());
+  }
+
+  @VisibleForTesting
+  public MobFileCleanerChore() {
+    this.master = null;
+  }
+
+  @Override
+  @edu.umd.cs.findbugs.annotations.SuppressWarnings(value = "REC_CATCH_EXCEPTION",
+      justification = "Intentional")
+  protected void chore() {
+    try {
+      TableDescriptors htds = master.getTableDescriptors();
+      Map<String, TableDescriptor> map = htds.getAll();
+      for (TableDescriptor htd : map.values()) {
+        for (ColumnFamilyDescriptor hcd : htd.getColumnFamilies()) {
+          if (hcd.isMobEnabled() && hcd.getMinVersions() == 0) {
+            // Clean only mob-enabled columns. Obtain a read table lock before
+            // cleaning to synchronize with MobFileCompactionChore.
+            final LockManager.MasterLock lock = master.getLockManager().createMasterLock(
+                MobUtils.getTableLockName(htd.getTableName()), LockType.SHARED,
+                this.getClass().getSimpleName() + ": Cleaning expired mob files");
+            try {
+              lock.acquire();
+              cleaner.cleanExpiredMobFiles(htd.getTableName().getNameAsString(), hcd);
+            } finally {
+              lock.release();
+            }
+          }
+        }
+        // Now clean obsolete files for a table
+

[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-17 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336202612
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreCompactor.java
 ##
 @@ -183,105 +270,166 @@ protected boolean performCompaction(FileDetails fd, InternalScanner scanner, Cel
     boolean hasMore;
     Path path = MobUtils.getMobFamilyPath(conf, store.getTableName(), store.getColumnFamilyName());
     byte[] fileName = null;
-    StoreFileWriter mobFileWriter = null, delFileWriter = null;
-    long mobCells = 0, deleteMarkersCount = 0;
+    StoreFileWriter mobFileWriter = null;
+    long mobCells = 0;
     long cellsCountCompactedToMob = 0, cellsCountCompactedFromMob = 0;
     long cellsSizeCompactedToMob = 0, cellsSizeCompactedFromMob = 0;
     boolean finished = false;
+
     ScannerContext scannerContext =
         ScannerContext.newBuilder().setBatchLimit(compactionKVMax).build();
     throughputController.start(compactionName);
-    KeyValueScanner kvs = (scanner instanceof KeyValueScanner)? (KeyValueScanner)scanner : null;
-    long shippedCallSizeLimit = (long) numofFilesToCompact * this.store.getColumnFamilyDescriptor().getBlocksize();
+    KeyValueScanner kvs = (scanner instanceof KeyValueScanner) ? (KeyValueScanner) scanner : null;
+    long shippedCallSizeLimit =
+        (long) numofFilesToCompact * this.store.getColumnFamilyDescriptor().getBlocksize();
+
+    MobCell mobCell = null;
     try {
       try {
         // If the mob file writer could not be created, directly write the cell to the store file.
         mobFileWriter = mobStore.createWriterInTmp(new Date(fd.latestPutTs), fd.maxKeyCount,
           compactionCompression, store.getRegionInfo().getStartKey(), true);
         fileName = Bytes.toBytes(mobFileWriter.getPath().getName());
       } catch (IOException e) {
-        LOG.warn("Failed to create mob writer, "
-            + "we will continue the compaction by writing MOB cells directly in store files", e);
+        // Bailing out
 
 Review comment:
   If we can't create the writer, something is wrong in the system (HDFS, for example), so what is the point of continuing the compaction in this case?
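
   The fail-fast behaviour argued for here can be sketched independently of the HBase API. `createMobWriter` and `compact` below are hypothetical stand-ins for `mobStore.createWriterInTmp(...)` and `performCompaction(...)`, not the actual code:

   ```java
   import java.io.IOException;

   public class FailFastSketch {
       // Stand-in for mobStore.createWriterInTmp(...); illustrative only.
       static void createMobWriter(boolean fsHealthy) throws IOException {
           if (!fsHealthy) {
               throw new IOException("cannot create MOB writer");
           }
       }

       // Stand-in for performCompaction(...): bails out ("false") instead of
       // degrading to writing MOB cells directly into store files.
       static boolean compact(boolean fsHealthy) {
           try {
               createMobWriter(fsHealthy);
           } catch (IOException e) {
               return false; // fail fast: surface the HDFS-level problem
           }
           return true;
       }

       public static void main(String[] args) {
           System.out.println(compact(true));  // prints: true
           System.out.println(compact(false)); // prints: false
       }
   }
   ```

   The design choice being discussed is exactly this: aborting makes a sick filesystem visible immediately, rather than masking it by silently changing where cells are written.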


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-17 Thread GitBox
busbey commented on a change in pull request #623: HBASE-22749: Distributed MOB 
compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336202017
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCleanerChore.java
 ##
 @@ -0,0 +1,260 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hbase.master;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.TableDescriptors;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.backup.HFileArchiver;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.master.locking.LockManager;
+import org.apache.hadoop.hbase.mob.MobConstants;
+import org.apache.hadoop.hbase.mob.ExpiredMobFileCleaner;
+import org.apache.hadoop.hbase.mob.MobUtils;
+import org.apache.hadoop.hbase.procedure2.LockType;
+import org.apache.hadoop.hbase.regionserver.BloomType;
+import org.apache.hadoop.hbase.regionserver.HStoreFile;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.hbase.util.HFileArchiveUtil;
+import 
org.apache.hbase.thirdparty.com.google.common.annotations.VisibleForTesting;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * The Class ExpiredMobFileCleanerChore for running cleaner regularly to 
remove the expired
+ * mob files.
+ */
+@InterfaceAudience.Private
+public class MobFileCleanerChore extends ScheduledChore {
+
+  private static final Logger LOG = 
LoggerFactory.getLogger(MobFileCleanerChore.class);
+  private final HMaster master;
+  private ExpiredMobFileCleaner cleaner;
+
+  public MobFileCleanerChore(HMaster master) {
+super(master.getServerName() + "-ExpiredMobFileCleanerChore", master, 
master.getConfiguration()
+  .getInt(MobConstants.MOB_CLEANER_PERIOD, 
MobConstants.DEFAULT_MOB_CLEANER_PERIOD), master
+  .getConfiguration().getInt(MobConstants.MOB_CLEANER_PERIOD,
+MobConstants.DEFAULT_MOB_CLEANER_PERIOD), TimeUnit.SECONDS);
+this.master = master;
+cleaner = new ExpiredMobFileCleaner();
+cleaner.setConf(master.getConfiguration());
+  }
+
+  @VisibleForTesting
+  public MobFileCleanerChore() {
+this.master = null;
+  }
+  
+  @Override
+  
@edu.umd.cs.findbugs.annotations.SuppressWarnings(value="REC_CATCH_EXCEPTION",
+justification="Intentional")
+
+  protected void chore() {
+try {
+
+  TableDescriptors htds = master.getTableDescriptors();
+  Map map = htds.getAll();
+  for (TableDescriptor htd : map.values()) {
+for (ColumnFamilyDescriptor hcd : htd.getColumnFamilies()) {
+  if (hcd.isMobEnabled() && hcd.getMinVersions() == 0) {
+// clean only for mob-enabled column.
+// obtain a read table lock before cleaning, synchronize with 
MobFileCompactionChore.
+final LockManager.MasterLock lock = 
master.getLockManager().createMasterLock(
+MobUtils.getTableLockName(htd.getTableName()), LockType.SHARED,
+this.getClass().getSimpleName() + ": Cleaning expired mob 
files");
+try {
+  lock.acquire();
+  
cleaner.cleanExpiredMobFiles(htd.getTableName().getNameAsString(), hcd);
+} finally {
+  lock.release();
+}
+  }
+}
+// Now clean obsolete files for a table
+

[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-17 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336201383
 
 

 ##
 File path: hbase-server/src/test/java/org/apache/hadoop/hbase/mob/FaultyMobStoreCompactor.java
 ##
 @@ -0,0 +1,355 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.mob;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InterruptedIOException;
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.List;
+import java.util.Random;
+import java.util.concurrent.atomic.AtomicLong;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.PrivateCellUtil;
+import org.apache.hadoop.hbase.io.hfile.CorruptHFileException;
+import org.apache.hadoop.hbase.regionserver.CellSink;
+import org.apache.hadoop.hbase.regionserver.HStore;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hbase.regionserver.KeyValueScanner;
+import org.apache.hadoop.hbase.regionserver.ScannerContext;
+import org.apache.hadoop.hbase.regionserver.ShipperListener;
+import org.apache.hadoop.hbase.regionserver.StoreFileWriter;
+import org.apache.hadoop.hbase.regionserver.throttle.ThroughputControlUtil;
+import org.apache.hadoop.hbase.regionserver.throttle.ThroughputController;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.yetus.audience.InterfaceAudience;
+
+@InterfaceAudience.Private
+public class FaultyMobStoreCompactor extends DefaultMobStoreCompactor {
+
+  public static AtomicLong mobCounter = new AtomicLong();
+  public static AtomicLong totalFailures = new AtomicLong();
+  public static AtomicLong totalCompactions = new AtomicLong();
+  public static AtomicLong totalMajorCompactions = new AtomicLong();
+
+  static double failureProb = 0.1d;
+  static Random rnd = new Random();
+
+  public FaultyMobStoreCompactor(Configuration conf, HStore store) {
+    super(conf, store);
+    failureProb = conf.getDouble("injected.fault.probability", 0.1);
+  }
+
+  @Override
+  protected boolean performCompaction(FileDetails fd, InternalScanner scanner, CellSink writer,
 
 Review comment:
   Yes.




[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-17 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336199730
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MobFileCompactionChore.java
 ##
 @@ -0,0 +1,179 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.ScheduledChore;
+import org.apache.hadoop.hbase.TableDescriptors;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.CompactionState;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableState;
+import org.apache.hadoop.hbase.mob.MobConstants;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@InterfaceAudience.Private
+public class MobFileCompactionChore extends ScheduledChore {
+
+  private static final Logger LOG = LoggerFactory.getLogger(MobFileCompactionChore.class);
+  private final Configuration conf;
+  private final HMaster master;
+  private volatile boolean running = false;
+  private int regionBatchSize = 0; // not set - compact all
+
+  public MobFileCompactionChore(HMaster master) {
+    super(master.getServerName() + "-MobFileCompactionChore", master,
+      master.getConfiguration().getInt(MobConstants.MOB_COMPACTION_CHORE_PERIOD,
+        MobConstants.DEFAULT_MOB_COMPACTION_CHORE_PERIOD),
+      master.getConfiguration().getInt(MobConstants.MOB_COMPACTION_CHORE_PERIOD,
+        MobConstants.DEFAULT_MOB_COMPACTION_CHORE_PERIOD), TimeUnit.SECONDS);
+    this.master = master;
+    this.conf = master.getConfiguration();
+    this.regionBatchSize =
+        master.getConfiguration().getInt(MobConstants.MOB_MAJOR_COMPACTION_REGION_BATCH_SIZE,
+          MobConstants.DEFAULT_MOB_MAJOR_COMPACTION_REGION_BATCH_SIZE);
+  }
+
+  @Override
+  protected void chore() {
+
+    boolean reported = false;
+
+    try (Connection conn = ConnectionFactory.createConnection(conf);
+        Admin admin = conn.getAdmin()) {
+
+      if (running) {
+        LOG.warn(getName() + " is running already, skipping this attempt.");
+        return;
+      }
+      running = true;
+      TableDescriptors htds = master.getTableDescriptors();
+      Map<String, TableDescriptor> map = htds.getAll();
+      for (TableDescriptor htd : map.values()) {
+        if (!master.getTableStateManager().isTableState(htd.getTableName(),
+            TableState.State.ENABLED)) {
+          continue;
+        }
+        for (ColumnFamilyDescriptor hcd : htd.getColumnFamilies()) {
+          if (hcd.isMobEnabled()) {
+            if (!reported) {
+              master.reportMobCompactionStart(htd.getTableName());
+              reported = true;
+            }
+            LOG.info("Major compacting " + htd.getTableName() + " cf=" + hcd.getNameAsString());
+            if (regionBatchSize == MobConstants.DEFAULT_MOB_MAJOR_COMPACTION_REGION_BATCH_SIZE) {
+              admin.majorCompact(htd.getTableName(), hcd.getName());
+            } else {
+              performMajorCompactionInBatches(admin, htd, hcd);
+            }
+          }
+        }
+        if (reported) {
+          master.reportMobCompactionEnd(htd.getTableName());
+          reported = false;
+        }
+      }
+    } catch (Exception e) {
+      LOG.error("Failed to compact", e);
+    } finally {
+      running = false;
+    }
+  }
+
+  private void performMajorCompactionInBatches(Admin admin, TableDescriptor htd,
+      ColumnFamilyDescriptor hcd) throws IOException {
+
+    List<RegionInfo> regions = admin.getRegions(htd.getTableName());
+    if (regions.size() <= this.regionBatchSize)
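
The batching idea in `performMajorCompactionInBatches` above, walking the region list in slices of at most `regionBatchSize`, can be sketched independently of the HBase API. `partition` below is an illustrative helper, not part of the patch:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSketch {
    // Split a list into consecutive batches of at most batchSize elements,
    // mirroring how regions would be fed to majorCompact in groups.
    static <T> List<List<T>> partition(List<T> regions, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < regions.size(); i += batchSize) {
            batches.add(regions.subList(i, Math.min(i + batchSize, regions.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> regions = List.of(1, 2, 3, 4, 5);
        System.out.println(partition(regions, 2)); // prints: [[1, 2], [3, 4], [5]]
    }
}
```

Compacting in bounded batches limits how many regions are majorly compacting at once, which is the scalability point of HBASE-22749's master-side chore.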

[GitHub] [hbase] VladRodionov commented on a change in pull request #623: HBASE-22749: Distributed MOB compactions

2019-10-17 Thread GitBox
VladRodionov commented on a change in pull request #623: HBASE-22749: 
Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#discussion_r336192416
 
 

 ##
 File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
 ##
 @@ -1744,10 +1742,13 @@ public CompactRegionResponse compactRegion(final RpcController controller,
       master.checkInitialized();
       byte[] regionName = request.getRegion().getValue().toByteArray();
       TableName tableName = RegionInfo.getTable(regionName);
+      // TODO: support CompactType.MOB
       // if the region is a mob region, do the mob file compaction.
       if (MobUtils.isMobRegionName(tableName, regionName)) {
         checkHFileFormatVersionForMob();
-        return compactMob(request, tableName);
+        //return compactMob(request, tableName);
+        //TODO: support CompactType.MOB
 
 Review comment:
   Fixed.




[jira] [Commented] (HBASE-22749) Distributed MOB compactions

2019-10-17 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953982#comment-16953982
 ] 

HBase QA commented on HBASE-22749:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} https://github.com/apache/hbase/pull/623 does not apply to 
master. Rebase required? Wrong Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hbase/pull/623 |
| JIRA Issue | HBASE-22749 |
| Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-623/3/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |


This message was automatically generated.



> Distributed MOB compactions 
> 
>
> Key: HBASE-22749
> URL: https://issues.apache.org/jira/browse/HBASE-22749
> Project: HBase
>  Issue Type: New Feature
>  Components: mob
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Attachments: HBASE-22749-branch-2.2-v4.patch, 
> HBASE-22749-master-v1.patch, HBASE-22749-master-v2.patch, 
> HBase-MOB-2.0-v1.pdf, HBase-MOB-2.0-v2.1.pdf, HBase-MOB-2.0-v2.2.pdf, 
> HBase-MOB-2.0-v2.pdf
>
>
> There are several  drawbacks in the original MOB 1.0  (Moderate Object 
> Storage) implementation, which can limit the adoption of the MOB feature:  
> # MOB compactions are executed in a Master as a chore, which limits 
> scalability because all I/O goes through a single HBase Master server. 
> # Yarn/Mapreduce framework is required to run MOB compactions in a scalable 
> way, but this won’t work in a stand-alone HBase cluster.
> # Two separate compactors for MOB and for regular store files and their 
> interactions can result in a data loss (see HBASE-22075)
> The design goals for MOB 2.0 were to provide 100% MOB 1.0 - compatible 
> implementation, which is free of the above drawbacks and can be used as a 
> drop in replacement in existing MOB deployments. So, these are design goals 
> of a MOB 2.0:
> # Make MOB compactions scalable without relying on Yarn/Mapreduce framework
> # Provide unified compactor for both MOB and regular store files
> # Make it more robust especially w.r.t. to data losses. 
> # Simplify and reduce the overall MOB code.
> # Provide 100% compatible implementation with MOB 1.0.
> # No migration of data should be required between MOB 1.0 and MOB 2.0 - just 
> software upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #623: HBASE-22749: Distributed MOB compactions

2019-10-17 Thread GitBox
Apache-HBase commented on issue #623: HBASE-22749: Distributed MOB compactions
URL: https://github.com/apache/hbase/pull/623#issuecomment-543294358
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m  0s |  Docker mode activated.  |
   | :broken_heart: |  patch  |   0m  6s |  
https://github.com/apache/hbase/pull/623 does not apply to master. Rebase 
required? Wrong Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hbase/pull/623 |
   | JIRA Issue | HBASE-22749 |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-623/3/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] busbey commented on issue #661: HBASE-15519 Add per-user metrics with lossy counting

2019-10-17 Thread GitBox
busbey commented on issue #661: HBASE-15519 Add per-user metrics with lossy 
counting
URL: https://github.com/apache/hbase/pull/661#issuecomment-543289013
 
 
   one quick note: that's a single YCSB run on each side, right? That will share the Connection across the threads in a single JVM, so there won't be much in the way of concurrent users to stress the lossy accounting.
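
   For context, the lossy accounting referred to can be sketched as below. This is a simplified lossy-counting variant (no per-entry error term) under assumed names, not HBase's actual per-user metrics implementation:

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Simplified lossy counting: memory stays bounded because, at every bucket
   // boundary, entries whose count is at most the current bucket index are
   // dropped. Frequent keys (heavy users) survive; one-off users are pruned.
   public class LossyCountingSketch {
       private final Map<String, Long> counts = new HashMap<>();
       private final long bucketSize;     // ceil(1 / error)
       private long seen = 0;
       private long currentBucket = 1;

       public LossyCountingSketch(double error) {
           this.bucketSize = (long) Math.ceil(1.0 / error);
       }

       public void add(String key) {
           counts.merge(key, 1L, Long::sum);
           if (++seen % bucketSize == 0) { // end of a bucket
               counts.entrySet().removeIf(e -> e.getValue() <= currentBucket);
               currentBucket++;
           }
       }

       public boolean contains(String key) {
           return counts.containsKey(key);
       }

       public static void main(String[] args) {
           LossyCountingSketch lc = new LossyCountingSketch(0.02); // bucket = 50
           for (int i = 0; i < 1000; i++) {
               lc.add("heavy-user");    // one hot user
               lc.add("user-" + i);     // 1000 distinct one-off users
           }
           System.out.println(lc.contains("heavy-user")); // prints: true
           System.out.println(lc.contains("user-0"));     // prints: false
       }
   }
   ```

   The reviewer's point is that with one shared Connection there is effectively a single "user", so the pruning path above would barely be exercised.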




[jira] [Updated] (HBASE-22592) HMaster Construction failure log is lost

2019-10-17 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HBASE-22592:
-
Fix Version/s: (was: 3.0.0)

> HMaster Construction failure log is lost
> 
>
> Key: HBASE-22592
> URL: https://issues.apache.org/jira/browse/HBASE-22592
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.3.1, 1.3.5
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Attachments: image-2019-06-16-15-06-40-109.png
>
>
> If HMaster Construction fails at RunTime due to some exception similar to 
> NoSuchMethodError, HMaster failure Exception Stacktrace is lost if Logger 
> class is not loaded so far.
> Sample Exception which is not available in log/out files:
> !image-2019-06-16-15-06-40-109.png!





[GitHub] [hbase] busbey commented on issue #661: HBASE-15519 Add per-user metrics with lossy counting

2019-10-17 Thread GitBox
busbey commented on issue #661: HBASE-15519 Add per-user metrics with lossy 
counting
URL: https://github.com/apache/hbase/pull/661#issuecomment-543287854
 
 
   sorry, I can't seem to find enough time to review this in detail. If y'all are good then I'm good.




[jira] [Updated] (HBASE-23181) Blocked WAL archive: "LogRoller: Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us"

2019-10-17 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-23181:
--
Description: 
On a heavily loaded cluster, WAL count keeps rising and we can get into a state 
where we are not rolling the logs off fast enough. In particular, there is this 
interesting state at the extreme where we pick a region to flush because 'Too 
many WALs' but the region is actually not online. As the WAL count rises, we 
keep picking a region-to-flush that is no longer on the server. This condition 
blocks our being able to clear WALs; eventually WALs climb into the hundreds 
and the RS goes zombie with a full Call queue that starts throwing 
CallQueueTooLargeExceptions (bad if this server is the one carrying 
hbase:meta), i.e. clients fail to access the RegionServer.

One symptom is a fast spike in WAL count for the RS. A restart of the RS will 
break the bind.

Here is how it looks in the log:

{code}
# Here is region closing
2019-10-16 23:10:55,897 INFO 
org.apache.hadoop.hbase.regionserver.handler.UnassignRegionHandler: Closed 
8ee433ad59526778c53cc85ed3762d0b



# Then soon after ...
2019-10-16 23:11:44,041 WARN org.apache.hadoop.hbase.regionserver.LogRoller: 
Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not 
online on us
2019-10-16 23:11:45,006 INFO 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Too many WALs; 
count=45, max=32; forcing flush of 1 regions(s): 
8ee433ad59526778c53cc85ed3762d0b

...
# Later...

2019-10-16 23:20:25,427 INFO 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Too many WALs; 
count=542, max=32; forcing flush of 1 regions(s): 
8ee433ad59526778c53cc85ed3762d0b
2019-10-16 23:20:25,427 WARN org.apache.hadoop.hbase.regionserver.LogRoller: 
Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not 
online on us
{code}


I've seen these runaway WALs in old 1.2.x HBase; this exception is from 2.2.1.
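
The bind described above can be illustrated with a small sketch; the names and data structures below are hypothetical, not the actual `AbstractFSWAL`/`LogRoller` internals. One way out of the loop is to skip regions that are no longer online when choosing what to flush:

{code}
import java.util.Map;
import java.util.Set;

public class TooManyWalsSketch {
    // Hypothetical helper: choose the region holding the oldest WAL edits,
    // but skip regions that are not online so a "Too many WALs" flush
    // request cannot target a closed region forever.
    static String pickRegionToFlush(Map<String, Long> oldestSeqIdPerRegion,
                                    Set<String> onlineRegions) {
        return oldestSeqIdPerRegion.entrySet().stream()
            .filter(e -> onlineRegions.contains(e.getKey()))
            .min(Map.Entry.comparingByValue())
            .map(Map.Entry::getKey)
            .orElse(null);
    }

    public static void main(String[] args) {
        Map<String, Long> seq = Map.of("8ee433ad", 10L, "aaaa1111", 42L);
        // "8ee433ad" holds the oldest edits but is closed; pick the online one.
        System.out.println(pickRegionToFlush(seq, Set.of("aaaa1111"))); // prints: aaaa1111
    }
}
{code}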

  was:
On a heavily loaded cluster, WAL count keeps rising and we can get into a state 
where we are not rolling the logs off fast enough. In particular, there is this 
interesting state at the extreme where we pick a region to flush because 'Too 
many WALs' but the region is actually not online. As the WAL count rises, we 
keep picking a region-to-flush that is no longer on the server. This condition 
blocks our being able to clear WALs; eventually WALs climb into the hundreds 
and the RS goes zombie with a full Call queue that starts throwing 
CallQueueTooLargeExceptions (bad if this servers is the one carrying 
hbase:meta).

Here is how it looks in the log:

{code}
# Here is region closing
2019-10-16 23:10:55,897 INFO 
org.apache.hadoop.hbase.regionserver.handler.UnassignRegionHandler: Closed 
8ee433ad59526778c53cc85ed3762d0b



# Then soon after ...
2019-10-16 23:11:44,041 WARN org.apache.hadoop.hbase.regionserver.LogRoller: 
Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not 
online on us
2019-10-16 23:11:45,006 INFO 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Too many WALs; 
count=45, max=32; forcing flush of 1 regions(s): 
8ee433ad59526778c53cc85ed3762d0b

...
# Later...

2019-10-16 23:20:25,427 INFO 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL: Too many WALs; 
count=542, max=32; forcing flush of 1 regions(s): 
8ee433ad59526778c53cc85ed3762d0b
2019-10-16 23:20:25,427 WARN org.apache.hadoop.hbase.regionserver.LogRoller: 
Failed to schedule flush of 8ee433ad59526778c53cc85ed3762d0b, because it is not 
online on us
{code}


I've seen this runaway WALs in old 1.2.x hbase and this exception is from 2.2.1.


> Blocked WAL archive: "LogRoller: Failed to schedule flush of 
> 8ee433ad59526778c53cc85ed3762d0b, because it is not online on us"
> --
>
> Key: HBASE-23181
> URL: https://issues.apache.org/jira/browse/HBASE-23181
> Project: HBase
>  Issue Type: Bug
>Reporter: Michael Stack
>Priority: Major
>
> On a heavily loaded cluster, WAL count keeps rising and we can get into a 
> state where we are not rolling the logs off fast enough. In particular, there 
> is this interesting state at the extreme where we pick a region to flush 
> because 'Too many WALs' but the region is actually not online. As the WAL 
> count rises, we keep picking a region-to-flush that is no longer on the 
> server. This condition blocks our being able to clear WALs; eventually WALs 
> climb into the hundreds and the RS goes zombie with a full Call queue that 
> starts throwing CallQueueTooLargeExceptions (bad if this server is the one 
> carrying hbase:meta), i.e. clients fail to access the RegionServer.
> One symptom is a fast spike in WAL count for the RS. A 

[GitHub] [hbase] Apache-HBase commented on issue #729: HBASE-22739 ArrayIndexOutOfBoundsException when balance

2019-10-17 Thread GitBox
Apache-HBase commented on issue #729: HBASE-22739 
ArrayIndexOutOfBoundsException when balance
URL: https://github.com/apache/hbase/pull/729#issuecomment-543274175
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   1m 22s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m 29s |  master passed  |
   | :green_heart: |  compile  |   0m 56s |  master passed  |
   | :green_heart: |  checkstyle  |   1m 23s |  master passed  |
   | :green_heart: |  shadedjars  |   4m 51s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   0m 42s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m 11s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   4m  8s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m  7s |  the patch passed  |
   | :green_heart: |  compile  |   1m  0s |  the patch passed  |
   | :green_heart: |  javac  |   1m  0s |  the patch passed  |
   | :green_heart: |  checkstyle  |   1m 21s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   4m 43s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  16m 23s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   0m 41s |  the patch passed  |
   | :green_heart: |  findbugs  |   4m 21s |  the patch passed  |
   ||| _ Other Tests _ |
   | :broken_heart: |  unit  | 273m  9s |  hbase-server in the patch failed.  |
   | :green_heart: |  asflicense  |   0m 35s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 333m 59s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-729/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/729 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 2bfe47a102f9 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-729/out/precommit/personality/provided.sh
 |
   | git revision | master / 0043dfebc5 |
   | Default Java | 1.8.0_181 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-729/1/artifact/out/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-729/1/testReport/
 |
   | Max. process+thread count | 4881 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-729/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"

2019-10-17 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953842#comment-16953842
 ] 

Michael Stack commented on HBASE-19663:
---

+1 on patch. Maybe change the commit message to mention this issue, since it 
notes how the include happens?

> site build fails complaining "javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found"
> 
>
> Key: HBASE-19663
> URL: https://issues.apache.org/jira/browse/HBASE-19663
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, website
>Reporter: Michael Stack
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 1.4.11
>
> Attachments: HBASE-19663-branch-1.4.v0.patch, script.sh
>
>
> Cryptic failure trying to build beta-1 RC. Fails like this:
> {code}
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 03:54 min
> [INFO] Finished at: 2017-12-29T01:13:15-08:00
> [INFO] Final Memory: 381M/9165M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
> hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate:
> [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS
> [ERROR] reason: class file for javax.annotation.meta.When not found
> [ERROR] warning: unknown enum constant When.UNKNOWN
> [ERROR] warning: unknown enum constant When.MAYBE
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))"
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found.
> [ERROR] javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found
> [ERROR]
> [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc 
> -J-Xmx2G @options @packages
> [ERROR]
> [ERROR] Refer to the generated Javadoc files in 
> '/home/stack/hbase.git/target/site/apidocs' dir.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
> javax.annotation.meta.TypeQualifierNickname comes from jsr305, but we don't 
> include that anywhere according to mvn dependency.
> Happens building the User API both test and main.
> Excluding these lines gets us passing again:
> {code}
>   3511   
>   3512 
> org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
>   3513   
>   3514   
>   3515 org.apache.yetus
>   3516 audience-annotations
>   3517 ${audience-annotations.version}
>   3518   
> + 3519   true
> {code}
> Tried upgrading to a newer mvn site plugin (ours is three years old) but that 
> brought a different set of problems.
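An alternative to excluding the doclet dependencies would be to make the missing jsr305 classes visible to javadoc. This is a sketch, not the patch attached to this issue, and it assumes the maven-javadoc-plugin version in use supports the `additionalDependencies` parameter:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <!-- Hypothetical alternative fix: put jsr305 on the javadoc classpath so
         javax.annotation.meta.* resolves, instead of excluding the doclet deps. -->
    <additionalDependencies>
      <additionalDependency>
        <groupId>com.google.code.findbugs</groupId>
        <artifactId>jsr305</artifactId>
        <version>3.0.2</version>
      </additionalDependency>
    </additionalDependencies>
  </configuration>
</plugin>
```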



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23177) If fail to open reference because FNFE, make it plain it is a Reference

2019-10-17 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-23177:
--
Status: In Progress  (was: Patch Available)

> If fail to open reference because FNFE, make it plain it is a Reference
> ---
>
> Key: HBASE-23177
> URL: https://issues.apache.org/jira/browse/HBASE-23177
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
> Attachments: 
> 0001-HBASE-23177-If-fail-to-open-reference-because-FNFE-m.patch, 
> HBASE-23177.branch-1.001.patch
>
>
> If the root file for a Reference is missing, it takes a while to figure out. 
> The Master side reports a failed open of the Region; the RegionServer side 
> talks about an FNFE for some random file. Better to dump out the Reference 
> data, which helps in figuring out what has gone wrong. Otherwise it is 
> confusingly hard to tie the FNFE to the root cause.





[jira] [Updated] (HBASE-23177) If fail to open reference because FNFE, make it plain it is a Reference

2019-10-17 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-23177:
--
Status: Patch Available  (was: In Progress)

> If fail to open reference because FNFE, make it plain it is a Reference
> ---
>
> Key: HBASE-23177
> URL: https://issues.apache.org/jira/browse/HBASE-23177
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
> Attachments: 
> 0001-HBASE-23177-If-fail-to-open-reference-because-FNFE-m.patch, 
> HBASE-23177.branch-1.001.patch
>
>
> If the root file for a Reference is missing, it takes a while to figure out. 
> The Master side reports a failed open of the Region; the RegionServer side 
> talks about an FNFE for some random file. Better to dump out the Reference 
> data, which helps in figuring out what has gone wrong. Otherwise it is 
> confusingly hard to tie the FNFE to the root cause.





[jira] [Commented] (HBASE-22370) ByteBuf LEAK ERROR

2019-10-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953806#comment-16953806
 ] 

Hudson commented on HBASE-22370:


Results for branch master
[build #1508 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1508/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1508//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1508//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1508//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> ByteBuf LEAK ERROR
> --
>
> Key: HBASE-22370
> URL: https://issues.apache.org/jira/browse/HBASE-22370
> Project: HBase
>  Issue Type: Bug
>  Components: rpc, wal
>Affects Versions: 2.2.1
>Reporter: Lijin Bin
>Assignee: Lijin Bin
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.2.2, 2.1.8
>
> Attachments: HBASE-22370-master-v1.patch
>
>
> We ran a failover test and it threw a leak error; this is hard to reproduce.
> {code}
> 2019-05-06 02:30:27,781 ERROR [AsyncFSWAL-0] util.ResourceLeakDetector: LEAK: 
> ByteBuf.release() was not called before it's garbage-collected. See 
> http://netty.io/wiki/reference-counted-objects.html for more information.
> Recent access records:
> Created at:
>  
> org.apache.hbase.thirdparty.io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:334)
>  
> org.apache.hbase.thirdparty.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:187)
>  
> org.apache.hbase.thirdparty.io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:178)
>  
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush0(FanOutOneBlockAsyncDFSOutput.java:494)
>  
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput.flush(FanOutOneBlockAsyncDFSOutput.java:513)
>  
> org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.sync(AsyncProtobufLogWriter.java:144)
>  org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.sync(AsyncFSWAL.java:353)
>  
> org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.consume(AsyncFSWAL.java:536)
>  
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  java.lang.Thread.run(Thread.java:748)
> {code}
> If FanOutOneBlockAsyncDFSOutput#endBlock throws an Exception before 
> "buf.release();" is called, this buf has no chance to be released.
> In CallRunner, if the call is skipped or a timed-out call is dropped, cleanup 
> is never called on it.
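The leak described above is the classic argument for releasing in a finally block. A minimal sketch with a stub reference-counted buffer — this mirrors Netty's retain/release contract but is not Netty's actual ByteBuf API:

```java
public class ReleaseSketch {
    // Tiny stand-in for a reference-counted buffer like Netty's ByteBuf.
    static class RefCountedBuf {
        private int refCnt = 1;
        int refCnt() { return refCnt; }
        void release() { refCnt--; }
    }

    // endBlock-style method: even if the write fails, the buffer is released
    // exactly once because release() sits in a finally block.
    static void endBlock(RefCountedBuf buf, boolean failWrite) {
        try {
            if (failWrite) {
                throw new RuntimeException("simulated flush failure");
            }
        } finally {
            buf.release(); // runs on both the success and the exception path
        }
    }

    public static void main(String[] args) {
        RefCountedBuf buf = new RefCountedBuf();
        try {
            endBlock(buf, true);
        } catch (RuntimeException expected) {
            // the exception propagates, but the buffer was still released
        }
        System.out.println(buf.refCnt()); // 0 => no leak
    }
}
```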





[jira] [Commented] (HBASE-23177) If fail to open reference because FNFE, make it plain it is a Reference

2019-10-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953807#comment-16953807
 ] 

Hudson commented on HBASE-23177:


Results for branch master
[build #1508 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1508/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1508//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1508//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1508//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> If fail to open reference because FNFE, make it plain it is a Reference
> ---
>
> Key: HBASE-23177
> URL: https://issues.apache.org/jira/browse/HBASE-23177
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
> Attachments: 
> 0001-HBASE-23177-If-fail-to-open-reference-because-FNFE-m.patch, 
> HBASE-23177.branch-1.001.patch
>
>
> If the root file for a Reference is missing, it takes a while to figure out. 
> The Master side reports a failed open of the Region; the RegionServer side 
> talks about an FNFE for some random file. Better to dump out the Reference 
> data, which helps in figuring out what has gone wrong. Otherwise it is 
> confusingly hard to tie the FNFE to the root cause.





[jira] [Commented] (HBASE-23107) Avoid temp byte array creation when doing cacheDataOnWrite

2019-10-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953808#comment-16953808
 ] 

Hudson commented on HBASE-23107:


Results for branch master
[build #1508 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1508/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1508//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1508//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1508//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Avoid temp byte array creation when doing cacheDataOnWrite
> --
>
> Key: HBASE-23107
> URL: https://issues.apache.org/jira/browse/HBASE-23107
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache, HFile
>Reporter: chenxu
>Assignee: chenxu
>Priority: Major
>  Labels: gc
> Fix For: 3.0.0, 2.3.0
>
> Attachments: flamegraph_after.svg, flamegraph_before.svg
>
>
> code in HFileBlock.Writer.cloneUncompressedBufferWithHeader
> {code:java}
> ByteBuffer cloneUncompressedBufferWithHeader() {
>   expectState(State.BLOCK_READY);
>   byte[] uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   …
> }
> {code}
> When the cacheOnWrite feature is enabled, a temp byte array is created in 
> order to copy the block's data; we can avoid this by using ByteBuffAllocator. 
> This can improve GC performance in write-heavy scenarios.
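ByteBuffAllocator is HBase-internal, but the copy being avoided can be illustrated with plain JDK classes. This sketch only shows the pattern of wrapping an existing buffer instead of materializing a fresh array via toByteArray():

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

public class NoCopySketch {
    // Expose the internal buffer of ByteArrayOutputStream so callers can wrap
    // it without the defensive copy that toByteArray() performs.
    static class ExposedBaos extends ByteArrayOutputStream {
        ByteBuffer asReadOnlyBuffer() {
            // Wraps the live internal array: zero-copy, but only valid while no
            // further writes happen (the real HBase change manages this kind of
            // ownership through ByteBuffAllocator).
            return ByteBuffer.wrap(buf, 0, count).asReadOnlyBuffer();
        }
    }

    public static void main(String[] args) {
        ExposedBaos baos = new ExposedBaos();
        baos.write('h');
        baos.write('i');
        ByteBuffer noCopy = baos.asReadOnlyBuffer(); // no new byte[] allocated
        byte[] withCopy = baos.toByteArray();        // allocates a temp array
        System.out.println(noCopy.remaining() == withCopy.length); // true
    }
}
```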





[jira] [Resolved] (HBASE-23176) delete_all_snapshot does not work with regex

2019-10-17 Thread Guangxu Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng resolved HBASE-23176.
---
Fix Version/s: 3.0.0
   Resolution: Fixed

Pushed to master. Thanks [~kpalanisamy] for contributing.

> delete_all_snapshot does not work with regex
> 
>
> Key: HBASE-23176
> URL: https://issues.apache.org/jira/browse/HBASE-23176
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 3.0.0
>
>
> delete_all_snapshot.rb uses the deprecated method 
> SnapshotDescription#getTable, but this method has already been removed in 3.0.x.
> {code:java}
> hbase(main):022:0>delete_all_snapshot("t10.*")
> SNAPSHOT TABLE + CREATION 
> TIME ERROR: undefined method `getTable' for 
> #
> {code}





[GitHub] [hbase] guangxuCheng merged pull request #725: HBASE-23176 delete_all_snapshot does not work with regex

2019-10-17 Thread GitBox
guangxuCheng merged pull request #725: HBASE-23176 delete_all_snapshot does not 
work with regex
URL: https://github.com/apache/hbase/pull/725
 
 
   




[GitHub] [hbase] binlijin opened a new pull request #729: HBASE-22739 ArrayIndexOutOfBoundsException when balance

2019-10-17 Thread GitBox
binlijin opened a new pull request #729: HBASE-22739 
ArrayIndexOutOfBoundsException when balance
URL: https://github.com/apache/hbase/pull/729
 
 
   When multiple ServerNames have the same hostname and port, the Cluster is 
built with a wrong regionPerServerIndex. 




[jira] [Assigned] (HBASE-22739) ArrayIndexOutOfBoundsException when balance

2019-10-17 Thread Lijin Bin (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijin Bin reassigned HBASE-22739:
-

Assignee: Lijin Bin

> ArrayIndexOutOfBoundsException when balance
> ---
>
> Key: HBASE-22739
> URL: https://issues.apache.org/jira/browse/HBASE-22739
> Project: HBase
>  Issue Type: Bug
>  Components: Balancer
>Reporter: casuallc
>Assignee: Lijin Bin
>Priority: Major
> Fix For: 2.1.1
>
>
>  
> {code:java}
> 2019-07-25 15:19:59,828 ERROR [master/nna:16000.Chore.1] 
> hbase.ScheduledChore: Caught error
> java.lang.ArrayIndexOutOfBoundsException: 3171
> at 
> org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer$Cluster.removeRegion(BaseLoadBalancer.java:873)
> at 
> org.apache.hadoop.hbase.master.balancer.BaseLoadBalancer$Cluster.doAction(BaseLoadBalancer.java:716)
> at 
> org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer.balanceCluster(StochasticLoadBalancer.java:407)
> at 
> org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer.balanceCluster(StochasticLoadBalancer.java:318)
> at org.apache.hadoop.hbase.master.HMaster.balance(HMaster.java:1650)
> at org.apache.hadoop.hbase.master.HMaster.balance(HMaster.java:1567)
> at 
> org.apache.hadoop.hbase.master.balancer.BalancerChore.chore(BalancerChore.java:49)
> at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> org.apache.hadoop.hbase.JitterScheduledThreadPoolExecutorImpl$JitteredRunnableScheduledFuture.run(JitterScheduledThreadPoolExecutorImpl.java:111)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> We should check whether the regionIndex is valid in removeRegion,
> java: 
> hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
> {code:java}
> int[] removeRegion(int[] regions, int regionIndex) {
>   //TODO: this maybe costly. Consider using linked lists
>   int[] newRegions = new int[regions.length - 1];
>   int i = 0;
>   for (i = 0; i < regions.length; i++) {
>     if (regions[i] == regionIndex) {
>       break;
>     }
>     if (i == regions.length - 1) {
>       return Arrays.copyOf(regions, regions.length);
>     }
>     newRegions[i] = regions[i];
>   }
>   System.arraycopy(regions, i + 1, newRegions, i, newRegions.length - i);
>   return newRegions;
> }
> {code}
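Assuming the desired validity check is "an absent region index returns an unchanged copy", a standalone guarded variant of the quoted method — not the actual BaseLoadBalancer code — could look like:

```java
import java.util.Arrays;

public class RemoveRegionSketch {
    // Remove regionIndex from regions; if it is not present, return an
    // unchanged copy instead of producing a corrupted result array.
    static int[] removeRegion(int[] regions, int regionIndex) {
        int pos = -1;
        for (int i = 0; i < regions.length; i++) {
            if (regions[i] == regionIndex) {
                pos = i;
                break;
            }
        }
        if (pos < 0) {
            return Arrays.copyOf(regions, regions.length); // index not found
        }
        int[] newRegions = new int[regions.length - 1];
        System.arraycopy(regions, 0, newRegions, 0, pos);
        System.arraycopy(regions, pos + 1, newRegions, pos, newRegions.length - pos);
        return newRegions;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(removeRegion(new int[]{1, 2, 3}, 2))); // [1, 3]
        System.out.println(Arrays.toString(removeRegion(new int[]{1, 2, 3}, 9))); // [1, 2, 3]
    }
}
```

Finding the position first, then copying, keeps the not-found case explicit instead of burying it in the copy loop.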





[jira] [Commented] (HBASE-23055) Alter hbase:meta

2019-10-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953636#comment-16953636
 ] 

Hudson commented on HBASE-23055:


Results for branch HBASE-23055
[build #17 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/17/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/17//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/17//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-23055/17//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Alter hbase:meta
> 
>
> Key: HBASE-23055
> URL: https://issues.apache.org/jira/browse/HBASE-23055
> Project: HBase
>  Issue Type: Task
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0
>
>
> hbase:meta is currently hardcoded. Its schema cannot be changed.
> This issue is about allowing edits to the hbase:meta schema. It will allow us 
> to set encodings such as block-with-indexes, which will help quell CPU usage 
> on the host carrying hbase:meta. A dynamic hbase:meta is the first step on 
> the road to being able to split meta.





[jira] [Commented] (HBASE-22514) Move rsgroup feature into core of HBase

2019-10-17 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953627#comment-16953627
 ] 

Hudson commented on HBASE-22514:


Results for branch HBASE-22514
[build #151 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/151/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/151//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/151//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/151//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/151//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Move rsgroup feature into core of HBase
> ---
>
> Key: HBASE-22514
> URL: https://issues.apache.org/jira/browse/HBASE-22514
> Project: HBase
>  Issue Type: Umbrella
>  Components: Admin, Client, rsgroup
>Reporter: Yechao Chen
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22514.master.001.patch, 
> image-2019-05-31-18-25-38-217.png
>
>
> The class RSGroupAdminClient is not public. 
> We need to use the Java API RSGroupAdminClient to manage RSGroups, 
> so RSGroupAdminClient should be public.
>  





[GitHub] [hbase] Apache-HBase commented on issue #721: HBASE-23170 Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME

2019-10-17 Thread GitBox
Apache-HBase commented on issue #721: HBASE-23170 Admin#getRegionServers use 
ClusterMetrics.Option.SERVERS_NAME
URL: https://github.com/apache/hbase/pull/721#issuecomment-543118391
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 6 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 36s |  Maven dependency ordering for branch 
 |
   | :green_heart: |  mvninstall  |   5m 16s |  master passed  |
   | :green_heart: |  compile  |   1m 52s |  master passed  |
   | :green_heart: |  checkstyle  |   2m 16s |  master passed  |
   | :green_heart: |  shadedjars  |   4m 34s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  javadoc  |   1m 15s |  master passed  |
   | :blue_heart: |  spotbugs  |   4m  3s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | :green_heart: |  findbugs  |   5m  8s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :blue_heart: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  
|
   | :green_heart: |  mvninstall  |   5m  1s |  the patch passed  |
   | :green_heart: |  compile  |   1m 57s |  the patch passed  |
   | :green_heart: |  javac  |   1m 57s |  the patch passed  |
   | :green_heart: |  checkstyle  |   2m 14s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  shadedjars  |   4m 44s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | :green_heart: |  hadoopcheck  |  15m 47s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | :green_heart: |  javadoc  |   1m 14s |  the patch passed  |
   | :green_heart: |  findbugs  |   5m 20s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  |   1m 53s |  hbase-client in the patch passed.  |
   | :green_heart: |  unit  | 160m 50s |  hbase-server in the patch passed.  |
   | :green_heart: |  unit  |   1m 12s |  hbase-it in the patch passed.  |
   | :green_heart: |  asflicense  |   1m 23s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 230m 12s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-721/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/721 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux f35c4b971bc6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-721/out/precommit/personality/provided.sh
 |
   | git revision | master / 0043dfebc5 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-721/4/testReport/
 |
   | Max. process+thread count | 4709 (vs. ulimit of 1) |
   | modules | C: hbase-client hbase-server hbase-it U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-721/4/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] Apache-HBase commented on issue #728: HBASE-23042 Parameters are incorrect in procedures jsp

2019-10-17 Thread GitBox
Apache-HBase commented on issue #728: HBASE-23042 Parameters are incorrect in 
procedures jsp
URL: https://github.com/apache/hbase/pull/728#issuecomment-543115778
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :yellow_heart: |  test4tests  |   0m  0s |  The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m 55s |  master passed  |
   | :green_heart: |  javadoc  |   0m 39s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m 24s |  the patch passed  |
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  javadoc  |   0m 35s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  | 157m 22s |  hbase-server in the patch passed.  |
   | :green_heart: |  asflicense  |   0m 27s |  The patch does not generate ASF 
License warnings.  |
   |  |   | 172m  3s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-728/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/728 |
   | Optional Tests | dupname asflicense javac javadoc unit |
   | uname | Linux b7b10360d7f5 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-728/out/precommit/personality/provided.sh
 |
   | git revision | master / 0043dfebc5 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-728/1/testReport/
 |
   | Max. process+thread count | 4340 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-728/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Commented] (HBASE-23066) Allow cache on write during compactions when prefetching is enabled

2019-10-17 Thread ramkrishna.s.vasudevan (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953614#comment-16953614
 ] 

ramkrishna.s.vasudevan commented on HBASE-23066:


[~busbey] - do you want to have a look at the patch and the charts added by 
[~jacob.leblanc]?

> Allow cache on write during compactions when prefetching is enabled
> ---
>
> Key: HBASE-23066
> URL: https://issues.apache.org/jira/browse/HBASE-23066
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction, regionserver
>Affects Versions: 1.4.10
>Reporter: Jacob LeBlanc
>Assignee: Jacob LeBlanc
>Priority: Minor
> Fix For: 2.3.0, 1.6.0
>
> Attachments: HBASE-23066.patch, performance_results.png, 
> prefetchCompactedBlocksOnWrite.patch
>
>
> In cases where users care a lot about read performance for tables that are 
> small enough to fit into a cache (or the cache is large enough), 
> prefetchOnOpen can be enabled to make the entire table available in cache 
> after the initial region opening is completed. Any new data can also be 
> guaranteed to be in cache with the cacheBlocksOnWrite setting.
> However, the missing piece is when all blocks are evicted after a compaction. 
> We found very poor performance after compactions for tables under heavy read 
> load and a slower backing filesystem (S3). After a compaction the prefetching 
> threads need to compete with threads servicing read requests and get 
> constantly blocked as a result. 
> This is a proposal to introduce a new cache configuration option that would 
> cache blocks on write during compaction for any column family that has 
> prefetch enabled. This would virtually guarantee all blocks are kept in cache 
> after the initial prefetch on open is completed allowing for guaranteed 
> steady read performance despite a slow backing file system.
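The decision the quoted proposal describes can be sketched as a small helper. This is an illustrative sketch only, not the actual patch: the property names (`hbase.rs.cacheblocksonwrite`, `hbase.rs.cacheblocksonwrite.compaction`) and the class are hypothetical stand-ins.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed behavior: cache a block written during compaction
// when the family prefetches on open and the new option is enabled, or when
// the generic cache-on-write option is already set. Property names are
// hypothetical, not the real HBase configuration keys.
public class CompactionCacheSketch {

    static boolean shouldCacheOnCompactionWrite(Map<String, Boolean> conf,
                                                boolean prefetchOnOpen) {
        boolean cacheOnWrite = conf.getOrDefault("hbase.rs.cacheblocksonwrite", false);
        boolean cacheOnCompaction =
            conf.getOrDefault("hbase.rs.cacheblocksonwrite.compaction", false);
        return cacheOnWrite || (cacheOnCompaction && prefetchOnOpen);
    }

    public static void main(String[] args) {
        Map<String, Boolean> conf = new HashMap<>();
        conf.put("hbase.rs.cacheblocksonwrite.compaction", true);
        // A prefetching family keeps its blocks in cache across compactions:
        System.out.println(shouldCacheOnCompactionWrite(conf, true));  // true
        // A non-prefetching family keeps today's behavior:
        System.out.println(shouldCacheOnCompactionWrite(conf, false)); // false
    }
}
```

With this shape, families that do not opt into prefetching are unaffected, which matches the "virtually guarantee all blocks are kept in cache" goal without changing the default write path.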



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23180) Create a nightly build to verify hbck2

2019-10-17 Thread Sakthi (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953528#comment-16953528
 ] 

Sakthi commented on HBASE-23180:


Looks like I don't have enough permission to create a new job out of a 
Jenkinsfile.

> Create a nightly build to verify hbck2
> --
>
> Key: HBASE-23180
> URL: https://issues.apache.org/jira/browse/HBASE-23180
> Project: HBase
>  Issue Type: Task
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: hbck2
> Fix For: hbase-operator-tools-1.1.0
>
>
> Quoting myself from the discussion thread from the dev mailing list "*How do 
> we test hbck2?*" -
> "Planning to start working on a nightly build that can spin up a 
> mini-cluster, load some data into it, do some actions to bring the cluster 
> into an undesirable state that hbck2 can fix and then invoke the hbck2 to see 
> if things work well.
>  
> Plan is to start small with one of the hbck2 commands and remaining ones can 
> be added incrementally. As of now I would like to start with making sure the 
> job uses one of the hbase versions (probably 2.1.x/2.2.x), we can discuss 
> about the need to run the job against all the present hbase versions/taking 
> in a bunch of hbase versions as input and running against them/or just a 
> single version.
>  
> The job script would be located in our operator-tools repo."





[jira] [Work started] (HBASE-23180) Create a nightly build to verify hbck2

2019-10-17 Thread Sakthi (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-23180 started by Sakthi.
--
> Create a nightly build to verify hbck2
> --
>
> Key: HBASE-23180
> URL: https://issues.apache.org/jira/browse/HBASE-23180
> Project: HBase
>  Issue Type: Task
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
>  Labels: hbck2
> Fix For: hbase-operator-tools-1.1.0
>
>
> Quoting myself from the discussion thread from the dev mailing list "*How do 
> we test hbck2?*" -
> "Planning to start working on a nightly build that can spin up a 
> mini-cluster, load some data into it, do some actions to bring the cluster 
> into an undesirable state that hbck2 can fix and then invoke the hbck2 to see 
> if things work well.
>  
> Plan is to start small with one of the hbck2 commands and remaining ones can 
> be added incrementally. As of now I would like to start with making sure the 
> job uses one of the hbase versions (probably 2.1.x/2.2.x), we can discuss 
> about the need to run the job against all the present hbase versions/taking 
> in a bunch of hbase versions as input and running against them/or just a 
> single version.
>  
> The job script would be located in our operator-tools repo."





[GitHub] [hbase] Apache-HBase commented on issue #725: HBASE-23176 delete_all_snapshot does not work with regex

2019-10-17 Thread GitBox
Apache-HBase commented on issue #725: HBASE-23176 delete_all_snapshot does not 
work with regex
URL: https://github.com/apache/hbase/pull/725#issuecomment-543057521
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | :blue_heart: |  reexec  |   1m 11s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 1 
new or modified test files.  |
   ||| _ master Compile Tests _ |
   | :green_heart: |  mvninstall  |   6m 52s |  master passed  |
   | :green_heart: |  javadoc  |   0m 14s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | :green_heart: |  mvninstall  |   5m 29s |  the patch passed  |
   | :green_heart: |  rubocop  |   0m 10s |  There were no new rubocop issues.  
|
   | :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | :green_heart: |  javadoc  |   0m 13s |  the patch passed  |
   ||| _ Other Tests _ |
   | :green_heart: |  unit  |  10m 17s |  hbase-shell in the patch passed.  |
   | :green_heart: |  asflicense  |   0m 13s |  The patch does not generate ASF 
License warnings.  |
   |  |   |  25m 35s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-725/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/725 |
   | Optional Tests | dupname asflicense javac javadoc unit rubocop |
   | uname | Linux f34471f58eee 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-725/out/precommit/personality/provided.sh
 |
   | git revision | master / 0043dfebc5 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-725/4/testReport/
 |
   | Max. process+thread count | 2621 (vs. ulimit of 1) |
   | modules | C: hbase-shell U: hbase-shell |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-725/4/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) rubocop=0.75.1 |
   | Powered by | Apache Yetus 0.11.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Commented] (HBASE-23176) delete_all_snapshot does not work with regex

2019-10-17 Thread Guangxu Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953479#comment-16953479
 ] 

Guangxu Cheng commented on HBASE-23176:
---

LGTM +1

> delete_all_snapshot does not work with regex
> 
>
> Key: HBASE-23176
> URL: https://issues.apache.org/jira/browse/HBASE-23176
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Major
>
> delete_all_snapshot.rb uses the deprecated method 
> SnapshotDescription#getTable, but this method has already been removed in 3.0.x.
> {code:java}
> hbase(main):022:0> delete_all_snapshot("t10.*")
> SNAPSHOT  TABLE + CREATION TIME
> ERROR: undefined method `getTable' for 
> #
> {code}





[jira] [Created] (HBASE-23183) Patch based precommit job fails

2019-10-17 Thread Peter Somogyi (Jira)
Peter Somogyi created HBASE-23183:
-

 Summary: Patch based precommit job fails
 Key: HBASE-23183
 URL: https://issues.apache.org/jira/browse/HBASE-23183
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 1.4.11
Reporter: Peter Somogyi


PreCommit-HBASE-Build job fails on branch-1.4 with missing JAVA_HOME.
{noformat}
WARNING: JAVA_HOME not defined. Disabling java tests. {noformat}
[https://builds.apache.org/view/H-L/view/HBase/job/PreCommit-HBASE-Build/962/console]





[jira] [Commented] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"

2019-10-17 Thread Peter Somogyi (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953477#comment-16953477
 ] 

Peter Somogyi commented on HBASE-19663:
---

+1, with the patch site build is successful.

Precommit job for this patch failed with an error; I'll open a Jira.
{noformat}
WARNING: JAVA_HOME not defined. Disabling java tests. {noformat}

> site build fails complaining "javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found"
> 
>
> Key: HBASE-19663
> URL: https://issues.apache.org/jira/browse/HBASE-19663
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, website
>Reporter: Michael Stack
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 1.4.11
>
> Attachments: HBASE-19663-branch-1.4.v0.patch, script.sh
>
>
> Cryptic failure trying to build beta-1 RC. Fails like this:
> {code}
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 03:54 min
> [INFO] Finished at: 2017-12-29T01:13:15-08:00
> [INFO] Final Memory: 381M/9165M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
> hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate:
> [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS
> [ERROR] reason: class file for javax.annotation.meta.When not found
> [ERROR] warning: unknown enum constant When.UNKNOWN
> [ERROR] warning: unknown enum constant When.MAYBE
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))"
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found.
> [ERROR] javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found
> [ERROR]
> [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc 
> -J-Xmx2G @options @packages
> [ERROR]
> [ERROR] Refer to the generated Javadoc files in 
> '/home/stack/hbase.git/target/site/apidocs' dir.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
> javax.annotation.meta.TypeQualifierNickname is out of jsr305 but we don't 
> include this anywhere according to mvn dependency.
> Happens building the User API both test and main.
> Excluding these lines gets us passing again:
> {code}
>   3511   
>   3512 
> org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
>   3513   
>   3514   
>   3515 org.apache.yetus
>   3516 audience-annotations
>   3517 ${audience-annotations.version}
>   3518   
> + 3519   true
> {code}
> Tried upgrading to a newer mvn site plugin (ours is three years old) but that 
> brought a different set of problems.





[GitHub] [hbase] mymeiyi opened a new pull request #728: HBASE-23042 Parameters are incorrect in procedures jsp

2019-10-17 Thread GitBox
mymeiyi opened a new pull request #728: HBASE-23042 Parameters are incorrect in 
procedures jsp
URL: https://github.com/apache/hbase/pull/728
 
 
   




[GitHub] [hbase] karthikhw commented on issue #725: HBASE-23176 delete_all_snapshot does not work with regex

2019-10-17 Thread GitBox
karthikhw commented on issue #725: HBASE-23176 delete_all_snapshot does not 
work with regex
URL: https://github.com/apache/hbase/pull/725#issuecomment-543047071
 
 
Thank you @guangxuCheng for highlighting the missing one. Sorry, I somehow missed it.




[jira] [Commented] (HBASE-23042) Parameters are incorrect in procedures jsp

2019-10-17 Thread Yi Mei (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953458#comment-16953458
 ] 

Yi Mei commented on HBASE-23042:


The cause is that in 
org.apache.hbase.thirdparty.com.google.protobuf.util.JsonFormat:
{code:java}
case BYTES:
  generator.print("\"");
  generator.print(BaseEncoding.base64().encode(((ByteString) 
value).toByteArray()));
  generator.print("\"");
  break;
{code}
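For illustration, here is the effect of that BYTES branch reproduced with plain JDK base64; the table name below is a made-up example, not taken from the issue:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Demonstrates why byte[] parameters look wrong in the procedures jsp:
// JsonFormat prints BYTES fields base64-encoded, so a readable table name
// becomes an opaque string. "testTable" is a hypothetical example.
public class Base64ParamDemo {
    public static void main(String[] args) {
        byte[] tableName = "testTable".getBytes(StandardCharsets.UTF_8);

        // What the JsonFormat BYTES branch emits:
        String shown = Base64.getEncoder().encodeToString(tableName);
        System.out.println(shown);   // prints "dGVzdFRhYmxl"

        // Decoding recovers the value a human would expect to see:
        String decoded = new String(Base64.getDecoder().decode(shown),
                                    StandardCharsets.UTF_8);
        System.out.println(decoded); // prints "testTable"
    }
}
```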

> Parameters are incorrect in procedures jsp
> --
>
> Key: HBASE-23042
> URL: https://issues.apache.org/jira/browse/HBASE-23042
> Project: HBase
>  Issue Type: Bug
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Attachments: 1.png
>
>
> In the procedures jsp, the parameters for table name and region start/end keys 
> are wrong; please see the attached picture.
> This is because all byte[] params are encoded in base64, which is confusing.





[jira] [Assigned] (HBASE-23042) Parameters are incorrect in procedures jsp

2019-10-17 Thread Yi Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Mei reassigned HBASE-23042:
--

Assignee: Yi Mei

> Parameters are incorrect in procedures jsp
> --
>
> Key: HBASE-23042
> URL: https://issues.apache.org/jira/browse/HBASE-23042
> Project: HBase
>  Issue Type: Bug
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Attachments: 1.png
>
>
> In the procedures jsp, the parameters for table name and region start/end keys 
> are wrong; please see the attached picture.
> This is because all byte[] params are encoded in base64, which is confusing.





[GitHub] [hbase] mymeiyi commented on a change in pull request #721: HBASE-23170 Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME

2019-10-17 Thread GitBox
mymeiyi commented on a change in pull request #721: HBASE-23170 
Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
URL: https://github.com/apache/hbase/pull/721#discussion_r335835605
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
 ##
 @@ -1048,8 +1048,7 @@
* @return current live region servers list wrapped by {@link 
CompletableFuture}
*/
   default CompletableFuture<Collection<ServerName>> getRegionServers() {
-return getClusterMetrics(EnumSet.of(Option.LIVE_SERVERS))
-  .thenApply(cm -> cm.getLiveServerMetrics().keySet());
+return getClusterMetrics(EnumSet.of(Option.SERVERS_NAME)).thenApply(cm -> cm.getServersName());
 
 Review comment:
   done




[GitHub] [hbase] mymeiyi commented on a change in pull request #721: HBASE-23170 Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME

2019-10-17 Thread GitBox
mymeiyi commented on a change in pull request #721: HBASE-23170 
Admin#getRegionServers use ClusterMetrics.Option.SERVERS_NAME
URL: https://github.com/apache/hbase/pull/721#discussion_r335835232
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionPlacementMaintainer.java
 ##
 @@ -204,9 +202,7 @@ private void genAssignmentPlan(TableName tableName,
 
 // Get all the region servers
 List<ServerName> servers = new ArrayList<>();
-servers.addAll(
-  FutureUtils.get(getConnection().getAdmin().getClusterMetrics(EnumSet.of(Option.LIVE_SERVERS)))
-    .getLiveServerMetrics().keySet());
+servers.addAll(FutureUtils.get(getConnection().getAdmin().getRegionServers()));
 
 Review comment:
   It's an async admin and does not need to be closed.




[jira] [Commented] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"

2019-10-17 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953433#comment-16953433
 ] 

Sean Busbey commented on HBASE-19663:
-

that's got it now. pretty good sign I should call it a night. :)

no problem building branch-1.3 without the patch. still waiting on master build 
w/o patch.

> site build fails complaining "javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found"
> 
>
> Key: HBASE-19663
> URL: https://issues.apache.org/jira/browse/HBASE-19663
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, website
>Reporter: Michael Stack
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 1.4.11
>
> Attachments: HBASE-19663-branch-1.4.v0.patch, script.sh
>
>
> Cryptic failure trying to build beta-1 RC. Fails like this:
> {code}
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 03:54 min
> [INFO] Finished at: 2017-12-29T01:13:15-08:00
> [INFO] Final Memory: 381M/9165M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
> hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate:
> [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS
> [ERROR] reason: class file for javax.annotation.meta.When not found
> [ERROR] warning: unknown enum constant When.UNKNOWN
> [ERROR] warning: unknown enum constant When.MAYBE
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))"
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found.
> [ERROR] javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found
> [ERROR]
> [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc 
> -J-Xmx2G @options @packages
> [ERROR]
> [ERROR] Refer to the generated Javadoc files in 
> '/home/stack/hbase.git/target/site/apidocs' dir.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
> javax.annotation.meta.TypeQualifierNickname is out of jsr305 but we don't 
> include this anywhere according to mvn dependency.
> Happens building the User API both test and main.
> Excluding these lines gets us passing again:
> {code}
>   3511   
>   3512 
> org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
>   3513   
>   3514   
>   3515 org.apache.yetus
>   3516 audience-annotations
>   3517 ${audience-annotations.version}
>   3518   
> + 3519   true
> {code}
> Tried upgrading to a newer mvn site plugin (ours is three years old) but that 
> brought a different set of problems.





[jira] [Updated] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"

2019-10-17 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-19663:

Attachment: HBASE-19663-branch-1.4.v0.patch

> site build fails complaining "javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found"
> 
>
> Key: HBASE-19663
> URL: https://issues.apache.org/jira/browse/HBASE-19663
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, website
>Reporter: Michael Stack
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 1.4.11
>
> Attachments: HBASE-19663-branch-1.4.v0.patch, script.sh
>
>
> Cryptic failure trying to build beta-1 RC. Fails like this:
> {code}
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 03:54 min
> [INFO] Finished at: 2017-12-29T01:13:15-08:00
> [INFO] Final Memory: 381M/9165M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
> hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate:
> [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS
> [ERROR] reason: class file for javax.annotation.meta.When not found
> [ERROR] warning: unknown enum constant When.UNKNOWN
> [ERROR] warning: unknown enum constant When.MAYBE
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))"
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found.
> [ERROR] javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found
> [ERROR]
> [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc 
> -J-Xmx2G @options @packages
> [ERROR]
> [ERROR] Refer to the generated Javadoc files in 
> '/home/stack/hbase.git/target/site/apidocs' dir.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
> javax.annotation.meta.TypeQualifierNickname is out of jsr305 but we don't 
> include this anywhere according to mvn dependency.
> Happens building the User API both test and main.
> Excluding these lines gets us passing again:
> {code}
>   3511   
>   3512 
> org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
>   3513   
>   3514   
>   3515 org.apache.yetus
>   3516 audience-annotations
>   3517 ${audience-annotations.version}
>   3518   
> + 3519   true
> {code}
> Tried upgrading to a newer mvn site plugin (ours is three years old) but that 
> brought a different set of problems.





[jira] [Commented] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"

2019-10-17 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953429#comment-16953429
 ] 

Sean Busbey commented on HBASE-19663:
-

Huh. Thought so. Lemme go switch off mobile and upload again.

> site build fails complaining "javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found"
> 
>
> Key: HBASE-19663
> URL: https://issues.apache.org/jira/browse/HBASE-19663
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, website
>Reporter: Michael Stack
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 1.4.11
>
> Attachments: script.sh
>
>
> Cryptic failure trying to build beta-1 RC. Fails like this:
> {code}
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 03:54 min
> [INFO] Finished at: 2017-12-29T01:13:15-08:00
> [INFO] Final Memory: 381M/9165M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
> hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate:
> [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS
> [ERROR] reason: class file for javax.annotation.meta.When not found
> [ERROR] warning: unknown enum constant When.UNKNOWN
> [ERROR] warning: unknown enum constant When.MAYBE
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))"
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found.
> [ERROR] javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found
> [ERROR]
> [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc 
> -J-Xmx2G @options @packages
> [ERROR]
> [ERROR] Refer to the generated Javadoc files in 
> '/home/stack/hbase.git/target/site/apidocs' dir.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
> javax.annotation.meta.TypeQualifierNickname is out of jsr305 but we don't 
> include this anywhere according to mvn dependency.
> Happens building the User API both test and main.
> Excluding these lines gets us passing again:
> {code}
>   3511   
>   3512 
> org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
>   3513   
>   3514   
>   3515 org.apache.yetus
>   3516 audience-annotations
>   3517 ${audience-annotations.version}
>   3518   
> + 3519   true
> {code}
> Tried upgrading to a newer mvn site plugin (ours is three years old) but that 
> brought a different set of problems.





[jira] [Commented] (HBASE-19663) site build fails complaining "javadoc: error - class file for javax.annotation.meta.TypeQualifierNickname not found"

2019-10-17 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-19663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953428#comment-16953428
 ] 

Michael Stack commented on HBASE-19663:
---

Did you post a patch [~busbey]?

> site build fails complaining "javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found"
> 
>
> Key: HBASE-19663
> URL: https://issues.apache.org/jira/browse/HBASE-19663
> Project: HBase
>  Issue Type: Bug
>  Components: documentation, website
>Reporter: Michael Stack
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 1.4.11
>
> Attachments: script.sh
>
>
> Cryptic failure trying to build beta-1 RC. Fails like this:
> {code}
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 03:54 min
> [INFO] Finished at: 2017-12-29T01:13:15-08:00
> [INFO] Final Memory: 381M/9165M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
> hbase: Error generating maven-javadoc-plugin:2.10.3:aggregate:
> [ERROR] Exit code: 1 - warning: unknown enum constant When.ALWAYS
> [ERROR] reason: class file for javax.annotation.meta.When not found
> [ERROR] warning: unknown enum constant When.UNKNOWN
> [ERROR] warning: unknown enum constant When.MAYBE
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: malformed: "#matchingRows(Cell, byte[]))"
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] 
> /home/stack/hbase.git/hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java:762:
>  warning - Tag @link: reference not found: #matchingRows(Cell, byte[]))
> [ERROR] javadoc: warning - Class javax.annotation.Nonnull not found.
> [ERROR] javadoc: error - class file for 
> javax.annotation.meta.TypeQualifierNickname not found
> [ERROR]
> [ERROR] Command line was: /home/stack/bin/jdk1.8.0_151/jre/../bin/javadoc 
> -J-Xmx2G @options @packages
> [ERROR]
> [ERROR] Refer to the generated Javadoc files in 
> '/home/stack/hbase.git/target/site/apidocs' dir.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
> javax.annotation.meta.TypeQualifierNickname comes from jsr305, but we don't 
> include jsr305 anywhere according to mvn dependency.
> This happens building the User API javadoc, both test and main.
> Excluding these lines gets us passing again:
> {code}
>   3511   <doclet>
>   3512 org.apache.yetus.audience.tools.IncludePublicAnnotationsStandardDoclet
>   3513   </doclet>
>   3514   <docletArtifact>
>   3515 <groupId>org.apache.yetus</groupId>
>   3516 <artifactId>audience-annotations</artifactId>
>   3517 <version>${audience-annotations.version}</version>
>   3518   </docletArtifact>
> + 3519   <useStandardDocletOptions>true</useStandardDocletOptions>
> {code}
> Tried upgrading to a newer mvn site plugin (ours is three years old) but that 
> hit a different set of problems.
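A workaround some builds use for this class of failure (a sketch under assumptions, not the fix recorded on this issue) is to hand the jsr305 jar to javadoc through maven-javadoc-plugin's additionalDependencies parameter, so that javax.annotation.meta.* resolves at doc-generation time:

```xml
<!-- Hypothetical sketch: the jsr305 coordinates and version are assumptions. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <additionalDependencies>
      <additionalDependency>
        <groupId>com.google.code.findbugs</groupId>
        <artifactId>jsr305</artifactId>
        <version>3.0.2</version>
      </additionalDependency>
    </additionalDependencies>
  </configuration>
</plugin>
```

This only widens the javadoc classpath, not the compile or runtime classpath, so jsr305 stays out of the shipped artifacts.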





[jira] [Updated] (HBASE-23177) If fail to open reference because FNFE, make it plain it is a Reference

2019-10-17 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-23177:
--
Status: Patch Available  (was: Reopened)

branch-1.001 is the branch-2 patch but w/o the test (it was a pain bringing back 
the test changes, so I left them off).

> If fail to open reference because FNFE, make it plain it is a Reference
> ---
>
> Key: HBASE-23177
> URL: https://issues.apache.org/jira/browse/HBASE-23177
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
> Attachments: 
> 0001-HBASE-23177-If-fail-to-open-reference-because-FNFE-m.patch, 
> HBASE-23177.branch-1.001.patch
>
>
> If the root file for a Reference is missing, it takes a while to figure out what 
> happened. The Master side reports a failed open of the Region; the RegionServer 
> side talks about an FNFE for some seemingly random file. Better to dump the 
> Reference data too; it helps in figuring out what has gone wrong. Otherwise it 
> is confusingly hard to tie the FNFE to the root cause.
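The idea can be sketched in miniature: when the file being opened is a Reference, wrap the FNFE with the Reference's details before rethrowing, so the log ties the missing file to its split parent. The class and method names below are illustrative assumptions, not the actual HBase patch:

```java
import java.io.FileNotFoundException;

public class ReferenceOpenExample {
    // Stand-in for HBase's Reference: points at the "top" or "bottom" half
    // of a parent region's store file. (Hypothetical simplification.)
    static class Reference {
        final String parentPath;
        final String range; // "top" or "bottom"
        Reference(String parentPath, String range) {
            this.parentPath = parentPath;
            this.range = range;
        }
        @Override public String toString() {
            return "Reference{parent=" + parentPath + ", range=" + range + "}";
        }
    }

    // Simulate opening a store file whose root file is missing; if it is a
    // Reference, rethrow with the Reference's details in the message.
    static void open(String path, Reference ref) throws FileNotFoundException {
        try {
            throw new FileNotFoundException(path); // the missing root file
        } catch (FileNotFoundException fnfe) {
            if (ref != null) {
                // Make it plain the failed open was a Reference, not a random file.
                FileNotFoundException wrapped = new FileNotFoundException(
                    path + " (this file is a " + ref + ")");
                wrapped.initCause(fnfe);
                throw wrapped;
            }
            throw fnfe;
        }
    }

    public static void main(String[] args) {
        Reference ref = new Reference("/hbase/data/ns/tbl/parent/cf/abc123", "top");
        try {
            open("/hbase/data/ns/tbl/daughter/cf/abc123.parent", ref);
        } catch (FileNotFoundException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With the wrap in place, the RegionServer-side message names the parent file and range instead of only the daughter's seemingly random path.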





[jira] [Updated] (HBASE-23177) If fail to open reference because FNFE, make it plain it is a Reference

2019-10-17 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-23177:
--
Attachment: HBASE-23177.branch-1.001.patch

> If fail to open reference because FNFE, make it plain it is a Reference
> ---
>
> Key: HBASE-23177
> URL: https://issues.apache.org/jira/browse/HBASE-23177
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.1.8, 2.2.3
>
> Attachments: 
> 0001-HBASE-23177-If-fail-to-open-reference-because-FNFE-m.patch, 
> HBASE-23177.branch-1.001.patch
>
>
> If the root file for a Reference is missing, it takes a while to figure out what 
> happened. The Master side reports a failed open of the Region; the RegionServer 
> side talks about an FNFE for some seemingly random file. Better to dump the 
> Reference data too; it helps in figuring out what has gone wrong. Otherwise it 
> is confusingly hard to tie the FNFE to the root cause.


