[jira] [Commented] (HBASE-21029) Miscount of memstore's heap/offheap size if same cell was put

2018-08-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574421#comment-16574421
 ] 

Hadoop QA commented on HBASE-21029:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.0 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
24s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
14s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} branch-2.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 3s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 47s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}183m 59s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}225m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestAsyncTableGetMultiThreaded |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:6f01af0 |
| JIRA Issue | HBASE-21029 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12934910/HBASE-21029.branch-2.0.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 378e6b9663f5 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2.0 / 7ee4aa459c |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13989/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13989/testReport/ |
| Max. process+thread count | 4766 (vs. ulimit of 1) |
| modu

[jira] [Created] (HBASE-21030) Correct javadoc for append operation

2018-08-09 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21030:
--

 Summary: Correct javadoc for append operation
 Key: HBASE-21030
 URL: https://issues.apache.org/jira/browse/HBASE-21030
 Project: HBase
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 1.5.0
Reporter: Nihal Jain


The doc for {{append}} operation is incorrect. (see {{@param append}} in the 
code snippet below or 
[Table.java#L566|https://github.com/apache/hbase/blob/3f5033f88ee9da2a5a42d058b9aefe57b089b3e1/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java#L566])
{code:java}
  /**
   * Appends values to one or more columns within a single row.
   * 
   * This operation guaranteed atomicity to readers. Appends are done
   * under a single row lock, so write operations to a row are synchronized, and
   * readers are guaranteed to see this operation fully completed.
   *
   * @param append object that specifies the columns and amounts to be used
   *  for the increment operations
   * @throws IOException e
   * @return values of columns after the append operation (maybe null)
   */
{code}
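
For reference, the {{@param}} text looks like a copy-paste from the increment javadoc; a possible corrected wording (just a suggestion, not the committed fix) could read:
{code:java}
  /**
   * Appends values to one or more columns within a single row.
   * ...
   * @param append object that specifies the columns and values to be appended
   * @throws IOException e
   * @return values of columns after the append operation (maybe null)
   */
{code}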



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20943) Add offline/online region count into metrics

2018-08-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574505#comment-16574505
 ] 

Hadoop QA commented on HBASE-20943:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
41s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
14s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 54s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
43s{color} | {color:green} hbase-hadoop2-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}132m 
34s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 1s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}184m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20943 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12934916/HBASE-20943-master-v2.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux e4f21d05bdc0 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git rev

[jira] [Commented] (HBASE-20965) Separate region server report requests to new handlers

2018-08-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574509#comment-16574509
 ] 

Hadoop QA commented on HBASE-20965:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
32s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
47s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} branch-2.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} hbase-server: The patch generated 0 new + 77 
unchanged - 1 fixed = 77 total (was 78) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
47s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 58s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}109m 
45s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:42ca976 |
| JIRA Issue | HBASE-20965 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12934919/HBASE-20965.branch-2.1.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 30c795083661 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2.1 / b2fc0f48f6 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13992/testReport/ |
| Max. process+thread count | 4623 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13992/console |
| Powered by | Apache Ye

[jira] [Commented] (HBASE-21025) Add cache for TableStateManager

2018-08-09 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574515#comment-16574515
 ] 

Duo Zhang commented on HBASE-21025:
---

Any comments? This is a long-standing TODO in the code, and after HBASE-20881 we
will make heavy use of this class to determine whether a table is disabled.
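
For illustration only, a minimal sketch of the kind of cache being discussed
(hypothetical names, not taken from the attached patch): keep the last known
state per table in memory, go to meta only on a miss, and update the cache
whenever the state changes.
{code:java}
// Hypothetical sketch of a table-state cache; names are illustrative and not
// from the HBASE-21025 patch.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CachedTableStateLookup {
  enum State { ENABLED, DISABLED, DISABLING, ENABLING }

  private final ConcurrentMap<String, State> cache = new ConcurrentHashMap<>();

  /** Returns the cached state, loading from meta only on a cache miss. */
  State getState(String tableName) {
    return cache.computeIfAbsent(tableName, this::loadStateFromMeta);
  }

  /** Any state change goes through here so the cache stays in sync with meta. */
  void setState(String tableName, State state) {
    writeStateToMeta(tableName, state);
    cache.put(tableName, state);
  }

  private State loadStateFromMeta(String tableName) {
    return State.ENABLED; // placeholder for the real meta read
  }

  private void writeStateToMeta(String tableName, State state) {
    // placeholder for the real meta update
  }
}
{code}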

> Add cache for TableStateManager
> ---
>
> Key: HBASE-21025
> URL: https://issues.apache.org/jira/browse/HBASE-21025
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-21025-v1.patch, HBASE-21025.patch
>
>
> After HBASE-20881, we will check whether a table is disabled in SCP, so we
> need to add a cache for it to improve MTTR and also reduce requests to meta.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21014) Improve Stochastic Balancer to write HDFS favoured node hints for region primary blocks to avoid destroying data locality if needing to use HDFS Balancer

2018-08-09 Thread Hari Sekhon (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574532#comment-16574532
 ] 

Hari Sekhon commented on HBASE-21014:
-

Yes, this is what I thought, hence I'd already linked HBase-7932 and the book
reference, as well as a mailing list discussion from some of my ex-colleagues
from Cloudera who really know their stuff, like Harsh J and Lars George.

So really what I'm asking for is for the Stochastic Balancer to include the
hint writes, like the FavoredNodeBalancer does.

I already have dfs.datanode.block-pinning.enabled = true; it's just not much
use until the Stochastic Balancer gets this support, as I don't want to lose
the better balancing, which is used far more often than an HDFS rebalance.
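
For reference, a sketch of the two settings being discussed (illustrative Java
only; in practice hbase.master.loadbalancer.class lives in hbase-site.xml and
dfs.datanode.block-pinning.enabled in hdfs-site.xml):
{code:java}
// Illustrative only: these properties are normally set in hbase-site.xml and
// hdfs-site.xml; shown as code purely to spell out the key/value pairs.
import org.apache.hadoop.conf.Configuration;

public class FavoredNodeBalancerConfig {
  public static Configuration build() {
    Configuration conf = new Configuration();
    // Use the balancer that writes HDFS favoured-node hints for region blocks.
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.favored.FavoredNodeLoadBalancer");
    // Have DataNodes honour block pinning so the HDFS Balancer leaves
    // favoured-node replicas in place.
    conf.setBoolean("dfs.datanode.block-pinning.enabled", true);
    return conf;
  }
}
{code}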

> Improve Stochastic Balancer to write HDFS favoured node hints for region 
> primary blocks to avoid destroying data locality if needing to use HDFS 
> Balancer
> -
>
> Key: HBASE-21014
> URL: https://issues.apache.org/jira/browse/HBASE-21014
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Affects Versions: 1.1.2
>Reporter: Hari Sekhon
>Priority: Major
>
> Improve Stochastic Balancer to include the HDFS region location hints to 
> avoid the HDFS Balancer destroying data locality.
> Right now, according to a mix of docs, jiras and mailing list info, it appears
> that one must change
> {code:java}
> hbase.master.loadbalancer.class{code}
> to org.apache.hadoop.hbase.favored.FavoredNodeLoadBalancer, as it looks like
> this functionality is only in the FavoredNodeBalancer and not the standard
> Stochastic Balancer.
> [http://hbase.apache.org/book.html#_hbase_and_hdfs]
> This is not ideal, because we'd still like to use all the heuristics and work
> that has gone into the Stochastic Balancer, which I believe is currently the
> best and most mature HBase balancer.
> See also the linked Jiras and this discussion:
> [http://apache-hbase.679495.n3.nabble.com/HDFS-Balancer-td4086607.html]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21014) Improve Stochastic Balancer to write HDFS favoured node hints for region primary blocks to avoid destroying data locality if needing to use HDFS Balancer

2018-08-09 Thread Hari Sekhon (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574532#comment-16574532
 ] 

Hari Sekhon edited comment on HBASE-21014 at 8/9/18 9:10 AM:
-

Yes, this is what I thought, hence I'd already linked HBASE-7932 and the book
reference, as well as a mailing list discussion from some of my ex-colleagues
from Cloudera who really know their stuff, like Harsh J and Lars George.

So really what I'm asking for is for the Stochastic Balancer to include the
hint writes, like the FavoredNodeBalancer does.

I already have dfs.datanode.block-pinning.enabled = true; it's just not much
use until the Stochastic Balancer gets this support, as I don't want to lose
the better balancing, which is used far more often than an HDFS rebalance.


was (Author: harisekhon):
Yes this is what I thought hence I'd already linked HBase-7932 and book 
reference as well as a discussion on the mailing list from some of my 
ex-colleagues from Cloudera who really know their stuff like Harsh J and Lars 
George.

So really what I'm asking for is for the Stochastic Balancer to include the 
hint writes like the FavoredNodeBalancer.

I already have dfs.datanode.block-pinning.enabled = true, it's just not much 
use until the Stochastic Balancer gets this support as I don't want to lose the 
better balancing which is used more often than an hdfs rebalance.

> Improve Stochastic Balancer to write HDFS favoured node hints for region 
> primary blocks to avoid destroying data locality if needing to use HDFS 
> Balancer
> -
>
> Key: HBASE-21014
> URL: https://issues.apache.org/jira/browse/HBASE-21014
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Affects Versions: 1.1.2
>Reporter: Hari Sekhon
>Priority: Major
>
> Improve Stochastic Balancer to include the HDFS region location hints to 
> avoid the HDFS Balancer destroying data locality.
> Right now, according to a mix of docs, jiras and mailing list info, it appears
> that one must change
> {code:java}
> hbase.master.loadbalancer.class{code}
> to org.apache.hadoop.hbase.favored.FavoredNodeLoadBalancer, as it looks like
> this functionality is only in the FavoredNodeBalancer and not the standard
> Stochastic Balancer.
> [http://hbase.apache.org/book.html#_hbase_and_hdfs]
> This is not ideal, because we'd still like to use all the heuristics and work
> that has gone into the Stochastic Balancer, which I believe is currently the
> best and most mature HBase balancer.
> See also the linked Jiras and this discussion:
> [http://apache-hbase.679495.n3.nabble.com/HDFS-Balancer-td4086607.html]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21012) Revert the change of serializing TimeRangeTracker

2018-08-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574533#comment-16574533
 ] 

Hadoop QA commented on HBASE-21012:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 9s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  5m  
2s{color} | {color:blue} branch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
33s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  5m  
9s{color} | {color:blue} patch has no errors when building the reference guide. 
See footer for rendered docs, which you should manually inspect. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
31s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m 59s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}205m 
43s{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
53s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}275m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes |

[jira] [Commented] (HBASE-21021) Result returned by Append operation should be ordered

2018-08-09 Thread Nihal Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574546#comment-16574546
 ] 

Nihal Jain commented on HBASE-21021:


{quote}The test on master branch fails too:
{code:java}
testAppendWithMultipleFamilies(org.apache.hadoop.hbase.regionserver.TestAtomicOperation)
  Time elapsed: 2.859 sec  <<< FAILURE!
java.lang.AssertionError: expected null, but was:
at 
org.apache.hadoop.hbase.regionserver.TestAtomicOperation.testAppendWithMultipleFamilies(TestAtomicOperation.java:166){code}
{quote}
The test failure is because the result is not null on master. In branch-1,
setting {{setReturnResults}} to {{false}} returns null, while on master it now
returns an EMPTY_RESULT, due to the following line
[HRegion.java#L7710|https://github.com/apache/hbase/blob/3f5033f88ee9da2a5a42d058b9aefe57b089b3e1/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L7710].

The test passes on master after accounting for this change and asserting that
the returned result {{isEmpty()}} instead of being null.

The ordering issue does not exist in master.
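
To make the client-visible symptom concrete, a minimal branch-1 style sketch
(assuming an existing {{Table}} instance and branch-1 client method names; not
part of the patch):
{code:java}
// Sketch of the branch-1 symptom: the Result returned by append() holds its
// cells in an unsorted order, so getValue() (a binary search) can miss a cell.
import java.io.IOException;
import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class AppendOrderingExample {
  static void demo(Table table) throws IOException {
    Append append = new Append(Bytes.toBytes("row1"));
    append.add(Bytes.toBytes("fam_a"), Bytes.toBytes("q"), Bytes.toBytes("x"));
    append.add(Bytes.toBytes("fam_b"), Bytes.toBytes("q"), Bytes.toBytes("y"));
    Result result = table.append(append);

    byte[] value = result.getValue(Bytes.toBytes("fam_b"), Bytes.toBytes("q"));
    // On branch-1 this can print "null" even though fam_b:q was just appended,
    // because getValue() binary-searches an unsorted cell list.
    System.out.println(value == null ? "null" : Bytes.toString(value));
  }
}
{code}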

> Result returned by Append operation should be ordered
> -
>
> Key: HBASE-21021
> URL: https://issues.apache.org/jira/browse/HBASE-21021
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.5.0
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-21021.branch-1.001.patch
>
>
> *Problem:*
> The result returned by the append operation should be ordered. Currently, it
> returns an unordered list, which may cause problems: for example, if the user
> calls Result.getValue(byte[] family, byte[] qualifier), the method may return
> null even when the returned result has a value for (family, qualifier),
> because it performs a binary search over the unsorted result (which should
> have been sorted).
>  
> The result is enumerated by iterating over each entry of tempMemstore hashmap 
> (which will never be ordered) and adding the values (see 
> [HRegion.java#L7882|https://github.com/apache/hbase/blob/1b50fe53724aa62a242b7f64adf7845048df/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L7882]).
>  
> *Actual:* The returned result is unordered
> *Expected:* Similar to increment op, the returned result should be ordered.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21021) Result returned by Append operation should be ordered

2018-08-09 Thread Nihal Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574546#comment-16574546
 ] 

Nihal Jain edited comment on HBASE-21021 at 8/9/18 9:24 AM:


{quote}The test on master branch fails too:
{code:java}
testAppendWithMultipleFamilies(org.apache.hadoop.hbase.regionserver.TestAtomicOperation)
  Time elapsed: 2.859 sec  <<< FAILURE!
java.lang.AssertionError: expected null, but was:
at 
org.apache.hadoop.hbase.regionserver.TestAtomicOperation.testAppendWithMultipleFamilies(TestAtomicOperation.java:166){code}
{quote}
The test failure is because the result is not null on master. In branch-1,
setting {{setReturnResults}} to {{false}} returns null, while on master it now
returns an EMPTY_RESULT, due to the following change
[HRegion.java#L7710|https://github.com/apache/hbase/blob/3f5033f88ee9da2a5a42d058b9aefe57b089b3e1/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L7710]
introduced in HBASE-16283.

The test passes on master after accounting for this change and asserting that
the returned result {{isEmpty()}} instead of being null.

The ordering issue does not exist in master.


was (Author: nihaljain.cs):
{quote}The test on master branch fails too:
{code:java}
testAppendWithMultipleFamilies(org.apache.hadoop.hbase.regionserver.TestAtomicOperation)
  Time elapsed: 2.859 sec  <<< FAILURE!
java.lang.AssertionError: expected null, but was:
at 
org.apache.hadoop.hbase.regionserver.TestAtomicOperation.testAppendWithMultipleFamilies(TestAtomicOperation.java:166){code}
{quote}
The test failure is due to result not being null in case of master. In 
branch-1, setting {{setReturnResults}} to {{false}} will return null, while in 
master it returns an EMPTY_RESULT now, due to the following line 
[HRegion.java#L7710|https://github.com/apache/hbase/blob/3f5033f88ee9da2a5a42d058b9aefe57b089b3e1/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L7710].

The test passes on master after considering the change and asserting for not 
{{isEmpty()}}. 

The ordering issue does not exist in master.

> Result returned by Append operation should be ordered
> -
>
> Key: HBASE-21021
> URL: https://issues.apache.org/jira/browse/HBASE-21021
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.5.0
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-21021.branch-1.001.patch
>
>
> *Problem:*
> The result returned by the append operation should be ordered. Currently, it
> returns an unordered list, which may cause problems: for example, if the user
> calls Result.getValue(byte[] family, byte[] qualifier), the method may return
> null even when the returned result has a value for (family, qualifier),
> because it performs a binary search over the unsorted result (which should
> have been sorted).
>  
> The result is enumerated by iterating over each entry of tempMemstore hashmap 
> (which will never be ordered) and adding the values (see 
> [HRegion.java#L7882|https://github.com/apache/hbase/blob/1b50fe53724aa62a242b7f64adf7845048df/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L7882]).
>  
> *Actual:* The returned result is unordered
> *Expected:* Similar to increment op, the returned result should be ordered.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21021) Result returned by Append operation should be ordered

2018-08-09 Thread Nihal Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574546#comment-16574546
 ] 

Nihal Jain edited comment on HBASE-21021 at 8/9/18 9:30 AM:


{quote}The test on master branch fails too:
{code:java}
testAppendWithMultipleFamilies(org.apache.hadoop.hbase.regionserver.TestAtomicOperation)
  Time elapsed: 2.859 sec  <<< FAILURE!
java.lang.AssertionError: expected null, but was:
at 
org.apache.hadoop.hbase.regionserver.TestAtomicOperation.testAppendWithMultipleFamilies(TestAtomicOperation.java:166){code}
{quote}
The test failure is because the result is not null on master. In branch-1,
setting {{setReturnResults}} to {{false}} returns null, while on master it now
returns an EMPTY_RESULT, due to the following change
[HRegion.java#L7710|https://github.com/apache/hbase/blob/3f5033f88ee9da2a5a42d058b9aefe57b089b3e1/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L7710]
introduced in HBASE-16283.

The test passes on master after accounting for this change and asserting that
the returned result {{isEmpty()}}. Hence, the ordering issue does not exist in
master.

I am not sure whether the behavior change for {{setReturnResults}} is right. I
think it should still return null, as the documentation also says that it may
be null.

{code:java}
 /**
 * Appends values to one or more columns within a single row.
 * 
 * This operation guaranteed atomicity to readers. Appends are done
 * under a single row lock, so write operations to a row are synchronized, and
 * readers are guaranteed to see this operation fully completed.
 *
 * @param append object that specifies the columns and amounts to be used
 * for the increment operations
 * @throws IOException e
 * @return values of columns after the append operation (maybe null)
 */
{code}
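
To spell out the difference, a hedged sketch (assuming an existing {{Table}}
instance and branch-1 client method names; not from any patch):
{code:java}
// Sketch of the setReturnResults(false) behaviour difference discussed above.
import java.io.IOException;
import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class AppendReturnResultsExample {
  static void demo(Table table) throws IOException {
    Append append = new Append(Bytes.toBytes("row1"));
    append.add(Bytes.toBytes("fam_a"), Bytes.toBytes("q"), Bytes.toBytes("x"));
    append.setReturnResults(false);
    Result result = table.append(append);
    // branch-1: result is null here.
    // master (after HBASE-16283): result is an empty Result instead, so the
    // caller has to check result.isEmpty() rather than result == null.
    System.out.println(result == null ? "null" : "isEmpty=" + result.isEmpty());
  }
}
{code}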



was (Author: nihaljain.cs):
{quote}The test on master branch fails too:
{code:java}
testAppendWithMultipleFamilies(org.apache.hadoop.hbase.regionserver.TestAtomicOperation)
  Time elapsed: 2.859 sec  <<< FAILURE!
java.lang.AssertionError: expected null, but was:
at 
org.apache.hadoop.hbase.regionserver.TestAtomicOperation.testAppendWithMultipleFamilies(TestAtomicOperation.java:166){code}
{quote}
The test failure is due to result not being null in case of master. In 
branch-1, setting {{setReturnResults}} to {{false}} will return null, while in 
master it returns an EMPTY_RESULT now, due to the following chnage 
[HRegion.java#L7710|https://github.com/apache/hbase/blob/3f5033f88ee9da2a5a42d058b9aefe57b089b3e1/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L7710]
 introduced in HBASE-16283.

The test passes on master after considering the change and asserting for not 
{{isEmpty()}}. 

The ordering issue does not exist in master.

> Result returned by Append operation should be ordered
> -
>
> Key: HBASE-21021
> URL: https://issues.apache.org/jira/browse/HBASE-21021
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.5.0
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-21021.branch-1.001.patch
>
>
> *Problem:*
> The result returned by the append operation should be ordered. Currently, it
> returns an unordered list, which may cause problems: for example, if the user
> calls Result.getValue(byte[] family, byte[] qualifier), the method may return
> null even when the returned result has a value for (family, qualifier),
> because it performs a binary search over the unsorted result (which should
> have been sorted).
>  
> The result is enumerated by iterating over each entry of tempMemstore hashmap 
> (which will never be ordered) and adding the values (see 
> [HRegion.java#L7882|https://github.com/apache/hbase/blob/1b50fe53724aa62a242b7f64adf7845048df/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L7882]).
>  
> *Actual:* The returned result is unordered
> *Expected:* Similar to increment op, the returned result should be ordered.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20965) Separate region server report requests to new handlers

2018-08-09 Thread Guanghao Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-20965:
---
   Resolution: Fixed
Fix Version/s: 2.1.1
   Status: Resolved  (was: Patch Available)

Pushed to master, branch-2 and branch-2.1. Thanks [~Yi Mei] for contributing.

> Separate region server report requests to new handlers
> --
>
> Key: HBASE-20965
> URL: https://issues.apache.org/jira/browse/HBASE-20965
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-20965.branch-2.1.001.patch, 
> HBASE-20965.master.001.patch, HBASE-20965.master.002.patch, 
> HBASE-20965.master.003.patch, HBASE-20965.master.004.patch, 
> HBASE-20965.master.005.patch, HBASE-20965.master.006.patch, 
> HBASE-20965.master.007.patch, HBASE-20965.master.008.patch, 
> HBASE-20965.master.009.patch, HBASE-20965.master.010.patch, 
> HBASE-20965.master.011.patch
>
>
> In master rpc scheduler, all rpc requests are executed in a thread pool. This 
> task separates rs report requests to new handlers.
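
For illustration only (hypothetical names, not the actual patch), the idea is
roughly a dedicated handler pool for region server report calls so they do not
queue behind other master RPCs:
{code:java}
// Hypothetical sketch of routing region server report RPCs to their own
// handler pool; names are illustrative, not from the HBASE-20965 patch.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ReportAwareDispatcher {
  private final ExecutorService reportHandlers = Executors.newFixedThreadPool(3);
  private final ExecutorService defaultHandlers = Executors.newFixedThreadPool(30);

  void dispatch(Runnable call, boolean isRegionServerReport) {
    // Reports get their own handlers, so a busy default pool cannot delay
    // them (and a flood of reports cannot starve other requests).
    if (isRegionServerReport) {
      reportHandlers.execute(call);
    } else {
      defaultHandlers.execute(call);
    }
  }
}
{code}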



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21031) Memory leak if replay edits failed during region opening

2018-08-09 Thread Allan Yang (JIRA)
Allan Yang created HBASE-21031:
--

 Summary: Memory leak if replay edits failed during region opening
 Key: HBASE-21031
 URL: https://issues.apache.org/jira/browse/HBASE-21031
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.1, 2.1.0
Reporter: Allan Yang
Assignee: Allan Yang


Due to HBASE-21029, when replaying edits with a lot of identical cells, the
memstore won't flush, and an exception is thrown once all heap space is used:
{code}
2018-08-06 15:52:27,590 ERROR 
[RS_OPEN_REGION-regionserver/hb-bp10cw4ejoy0a2f3f-009:16020-2] 
handler.OpenRegionHandler(302): Failed open of 
region=hbase_test,dffa78,1531227033378.cbf9a2daf3aaa0c7e931e9c9a7b53f41., 
starting to roll back the global memstore size.
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at 
org.apache.hadoop.hbase.regionserver.OnheapChunk.allocateDataBuffer(OnheapChunk.java:41)
at org.apache.hadoop.hbase.regionserver.Chunk.init(Chunk.java:104)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:226)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:180)
at 
org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:163)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.getOrMakeChunk(MemStoreLABImpl.java:273)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:148)
at 
org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:111)
at 
org.apache.hadoop.hbase.regionserver.Segment.maybeCloneWithAllocator(Segment.java:178)
at 
org.apache.hadoop.hbase.regionserver.AbstractMemStore.maybeCloneWithAllocator(AbstractMemStore.java:287)
at 
org.apache.hadoop.hbase.regionserver.AbstractMemStore.add(AbstractMemStore.java:107)
at org.apache.hadoop.hbase.regionserver.HStore.add(HStore.java:706)
at 
org.apache.hadoop.hbase.regionserver.HRegion.restoreEdit(HRegion.java:5494)
at 
org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:4608)
at 
org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:4404)
{code}
After this exception, the memstore is not rolled back, and since MSLAB is used,
none of the allocated chunks will ever be released. That memory is leaked
forever...

We need to roll back the memstore if the region open fails (for now, only the
global memstore size is decreased after failure).

Another problem is that we use replayEditsPerRegion in RegionServerAccounting
to record how much memory is used during replay, and decrease the global
memstore size if the replay fails. This is not right: during replay we may also
flush the memstore, so the size in the replayEditsPerRegion map is not accurate
at all!
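
To illustrate the accounting problem with made-up numbers (plain Java, purely
hypothetical; not from any patch):
{code:java}
// Hypothetical illustration of why "bytes added during replay" over-counts
// once a flush happens mid-replay; all numbers are made up.
public class ReplayAccountingExample {
  public static void main(String[] args) {
    long replayEditsSize = 0; // what replayEditsPerRegion would record
    long memstoreSize = 0;    // what is actually still held in the memstore

    // Replay 100 MB of edits.
    replayEditsSize += 100;
    memstoreSize += 100;

    // A flush during replay empties the memstore, but the replay counter
    // is never reduced.
    memstoreSize = 0;

    // Replay another 50 MB of edits, then the region open fails.
    replayEditsSize += 50;
    memstoreSize += 50;

    // Rolling back the global memstore size by replayEditsSize (150) instead
    // of by what is actually still held (50) corrupts the global accounting.
    System.out.println("recorded=" + replayEditsSize + "MB, actual=" + memstoreSize + "MB");
  }
}
{code}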



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21031) Memory leak if replay edits failed during region opening

2018-08-09 Thread Allan Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-21031:
---
Attachment: memory leak.png

> Memory leak if replay edits failed during region opening
> 
>
> Key: HBASE-21031
> URL: https://issues.apache.org/jira/browse/HBASE-21031
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: memory leak.png
>
>
> Due to HBASE-21029, when replaying edits with a lot of identical cells, the
> memstore won't flush, and an exception is thrown once all heap space is used:
> {code}
> 2018-08-06 15:52:27,590 ERROR 
> [RS_OPEN_REGION-regionserver/hb-bp10cw4ejoy0a2f3f-009:16020-2] 
> handler.OpenRegionHandler(302): Failed open of 
> region=hbase_test,dffa78,1531227033378.cbf9a2daf3aaa0c7e931e9c9a7b53f41., 
> starting to roll back the global memstore size.
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> at 
> org.apache.hadoop.hbase.regionserver.OnheapChunk.allocateDataBuffer(OnheapChunk.java:41)
> at org.apache.hadoop.hbase.regionserver.Chunk.init(Chunk.java:104)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:226)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:180)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:163)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.getOrMakeChunk(MemStoreLABImpl.java:273)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:148)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:111)
> at 
> org.apache.hadoop.hbase.regionserver.Segment.maybeCloneWithAllocator(Segment.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.maybeCloneWithAllocator(AbstractMemStore.java:287)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.add(AbstractMemStore.java:107)
> at org.apache.hadoop.hbase.regionserver.HStore.add(HStore.java:706)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.restoreEdit(HRegion.java:5494)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:4608)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:4404)
> {code}
> After this exception, the memstore is not rolled back, and since MSLAB is
> used, none of the allocated chunks will ever be released. That memory is
> leaked forever...
> We need to roll back the memstore if the region open fails (for now, only the
> global memstore size is decreased after failure).
> Another problem is that we use replayEditsPerRegion in RegionServerAccounting
> to record how much memory is used during replay, and decrease the global
> memstore size if the replay fails. This is not right: during replay we may
> also flush the memstore, so the size in the replayEditsPerRegion map is not
> accurate at all!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21031) Memory leak if replay edits failed during region opening

2018-08-09 Thread Allan Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-21031:
---
Attachment: (was: memory leak.png)

> Memory leak if replay edits failed during region opening
> 
>
> Key: HBASE-21031
> URL: https://issues.apache.org/jira/browse/HBASE-21031
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
>
> Due to HBASE-21029, when replaying edits with a lot of identical cells, the
> memstore won't flush, and an exception is thrown once all heap space is used:
> {code}
> 2018-08-06 15:52:27,590 ERROR 
> [RS_OPEN_REGION-regionserver/hb-bp10cw4ejoy0a2f3f-009:16020-2] 
> handler.OpenRegionHandler(302): Failed open of 
> region=hbase_test,dffa78,1531227033378.cbf9a2daf3aaa0c7e931e9c9a7b53f41., 
> starting to roll back the global memstore size.
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> at 
> org.apache.hadoop.hbase.regionserver.OnheapChunk.allocateDataBuffer(OnheapChunk.java:41)
> at org.apache.hadoop.hbase.regionserver.Chunk.init(Chunk.java:104)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:226)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:180)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:163)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.getOrMakeChunk(MemStoreLABImpl.java:273)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:148)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:111)
> at 
> org.apache.hadoop.hbase.regionserver.Segment.maybeCloneWithAllocator(Segment.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.maybeCloneWithAllocator(AbstractMemStore.java:287)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.add(AbstractMemStore.java:107)
> at org.apache.hadoop.hbase.regionserver.HStore.add(HStore.java:706)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.restoreEdit(HRegion.java:5494)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:4608)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:4404)
> {code}
> After this exception, the memstore is not rolled back, and since MSLAB is
> used, none of the allocated chunks will ever be released. That memory is
> leaked forever...
> We need to roll back the memstore if the region open fails (for now, only the
> global memstore size is decreased after failure).
> Another problem is that we use replayEditsPerRegion in RegionServerAccounting
> to record how much memory is used during replay, and decrease the global
> memstore size if the replay fails. This is not right: during replay we may
> also flush the memstore, so the size in the replayEditsPerRegion map is not
> accurate at all!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21031) Memory leak if replay edits failed during region opening

2018-08-09 Thread Allan Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-21031:
---
Status: Patch Available  (was: Open)

> Memory leak if replay edits failed during region opening
> 
>
> Key: HBASE-21031
> URL: https://issues.apache.org/jira/browse/HBASE-21031
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.1, 2.1.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-21031.branch-2.0.001.patch, memoryleak.png
>
>
> Due to HBASE-21029, when replaying edits with a lot of identical cells, the
> memstore won't flush, and an exception is thrown once all heap space is used:
> {code}
> 2018-08-06 15:52:27,590 ERROR 
> [RS_OPEN_REGION-regionserver/hb-bp10cw4ejoy0a2f3f-009:16020-2] 
> handler.OpenRegionHandler(302): Failed open of 
> region=hbase_test,dffa78,1531227033378.cbf9a2daf3aaa0c7e931e9c9a7b53f41., 
> starting to roll back the global memstore size.
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> at 
> org.apache.hadoop.hbase.regionserver.OnheapChunk.allocateDataBuffer(OnheapChunk.java:41)
> at org.apache.hadoop.hbase.regionserver.Chunk.init(Chunk.java:104)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:226)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:180)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:163)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.getOrMakeChunk(MemStoreLABImpl.java:273)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:148)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:111)
> at 
> org.apache.hadoop.hbase.regionserver.Segment.maybeCloneWithAllocator(Segment.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.maybeCloneWithAllocator(AbstractMemStore.java:287)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.add(AbstractMemStore.java:107)
> at org.apache.hadoop.hbase.regionserver.HStore.add(HStore.java:706)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.restoreEdit(HRegion.java:5494)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:4608)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:4404)
> {code}
> After this exception, the memstore is not rolled back, and since MSLAB is
> used, none of the allocated chunks will ever be released. That memory is
> leaked forever...
> We need to roll back the memstore if the region open fails (for now, only the
> global memstore size is decreased after failure).
> Another problem is that we use replayEditsPerRegion in RegionServerAccounting
> to record how much memory is used during replay, and decrease the global
> memstore size if the replay fails. This is not right: during replay we may
> also flush the memstore, so the size in the replayEditsPerRegion map is not
> accurate at all!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21031) Memory leak if replay edits failed during region opening

2018-08-09 Thread Allan Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-21031:
---
Attachment: HBASE-21031.branch-2.0.001.patch

> Memory leak if replay edits failed during region opening
> 
>
> Key: HBASE-21031
> URL: https://issues.apache.org/jira/browse/HBASE-21031
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-21031.branch-2.0.001.patch, memoryleak.png
>
>
> Due to HBASE-21029, when replaying edits with a lot of identical cells, the
> memstore won't flush, and an exception is thrown once all heap space is used:
> {code}
> 2018-08-06 15:52:27,590 ERROR 
> [RS_OPEN_REGION-regionserver/hb-bp10cw4ejoy0a2f3f-009:16020-2] 
> handler.OpenRegionHandler(302): Failed open of 
> region=hbase_test,dffa78,1531227033378.cbf9a2daf3aaa0c7e931e9c9a7b53f41., 
> starting to roll back the global memstore size.
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> at 
> org.apache.hadoop.hbase.regionserver.OnheapChunk.allocateDataBuffer(OnheapChunk.java:41)
> at org.apache.hadoop.hbase.regionserver.Chunk.init(Chunk.java:104)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:226)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:180)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:163)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.getOrMakeChunk(MemStoreLABImpl.java:273)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:148)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:111)
> at 
> org.apache.hadoop.hbase.regionserver.Segment.maybeCloneWithAllocator(Segment.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.maybeCloneWithAllocator(AbstractMemStore.java:287)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.add(AbstractMemStore.java:107)
> at org.apache.hadoop.hbase.regionserver.HStore.add(HStore.java:706)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.restoreEdit(HRegion.java:5494)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:4608)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:4404)
> {code}
> After this exception, the memstore is not rolled back, and since MSLAB is
> used, none of the allocated chunks will ever be released. That memory is
> leaked forever...
> We need to roll back the memstore if the region open fails (for now, only the
> global memstore size is decreased after failure).
> Another problem is that we use replayEditsPerRegion in RegionServerAccounting
> to record how much memory is used during replay, and decrease the global
> memstore size if the replay fails. This is not right: during replay we may
> also flush the memstore, so the size in the replayEditsPerRegion map is not
> accurate at all!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21031) Memory leak if replay edits failed during region opening

2018-08-09 Thread Allan Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-21031:
---
Attachment: memoryleak.png

> Memory leak if replay edits failed during region opening
> 
>
> Key: HBASE-21031
> URL: https://issues.apache.org/jira/browse/HBASE-21031
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-21031.branch-2.0.001.patch, memoryleak.png
>
>
> Due to HBASE-21029, when replaying edits with a lot of identical cells, the
> memstore won't flush, and an exception is thrown once all heap space is used:
> {code}
> 2018-08-06 15:52:27,590 ERROR 
> [RS_OPEN_REGION-regionserver/hb-bp10cw4ejoy0a2f3f-009:16020-2] 
> handler.OpenRegionHandler(302): Failed open of 
> region=hbase_test,dffa78,1531227033378.cbf9a2daf3aaa0c7e931e9c9a7b53f41., 
> starting to roll back the global memstore size.
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> at 
> org.apache.hadoop.hbase.regionserver.OnheapChunk.allocateDataBuffer(OnheapChunk.java:41)
> at org.apache.hadoop.hbase.regionserver.Chunk.init(Chunk.java:104)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:226)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:180)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:163)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.getOrMakeChunk(MemStoreLABImpl.java:273)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:148)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:111)
> at 
> org.apache.hadoop.hbase.regionserver.Segment.maybeCloneWithAllocator(Segment.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.maybeCloneWithAllocator(AbstractMemStore.java:287)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.add(AbstractMemStore.java:107)
> at org.apache.hadoop.hbase.regionserver.HStore.add(HStore.java:706)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.restoreEdit(HRegion.java:5494)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:4608)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:4404)
> {code}
> After this exception, the memstore was not rolled back, and since MSLAB is 
> used, none of the allocated chunks will ever be released. That memory is 
> leaked forever.
> We need to roll back the memory if opening the region fails (for now, only 
> the global memstore size is decreased after a failure).
> Another problem is that we use replayEditsPerRegion in RegionServerAccounting 
> to record how much memory is used during replay, and decrease the global 
> memstore size if the replay fails. This is not right: during replay we may 
> also flush the memstore, so the size in the replayEditsPerRegion map is not 
> accurate at all! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21026) Fix Backup/Restore command usage bug in book

2018-08-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574879#comment-16574879
 ] 

Hudson commented on HBASE-21026:


Results for branch master
[build #423 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/423/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/423//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/423//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/423//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Fix Backup/Restore command usage bug in book
> 
>
> Key: HBASE-21026
> URL: https://issues.apache.org/jira/browse/HBASE-21026
> Project: HBase
>  Issue Type: Bug
>  Components: backup&restore, documentation
>Affects Versions: 2.0.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-21026.001.patch, HBASE-21026.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18201) add UT and docs for DataBlockEncodingTool

2018-08-09 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574896#comment-16574896
 ] 

Duo Zhang commented on HBASE-18201:
---

+1 for branch-2.1.

> add UT and docs for DataBlockEncodingTool
> -
>
> Key: HBASE-18201
> URL: https://issues.apache.org/jira/browse/HBASE-18201
> Project: HBase
>  Issue Type: Sub-task
>  Components: tooling
>Reporter: Chia-Ping Tsai
>Assignee: Kuan-Po Tseng
>Priority: Minor
>  Labels: beginner
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-18201.master.001.patch, 
> HBASE-18201.master.002.patch, HBASE-18201.master.002.patch, 
> HBASE-18201.master.003.patch, HBASE-18201.master.004.patch, 
> HBASE-18201.master.005.patch, HBASE-18201.master.005.patch, 
> HBASE-18201.master.005.patch, HBASE-18201.master.006.patch, 
> HBASE-18201.master.006.patch
>
>
> There are no examples, documentation, or tests for DataBlockEncodingTool. We should 
> make it friendly to use if any use case exists. Otherwise, we should just get rid of 
> it, because DataBlockEncodingTool presumes that the cell implementation 
> returned from DataBlockEncoder is KeyValue. That assumption may obstruct the 
> cleanup of KeyValue references in the read/write path of the code base.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21021) Result returned by Append operation should be ordered

2018-08-09 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574984#comment-16574984
 ] 

Ted Yu commented on HBASE-21021:


[~allan163]:
What do you think of Nihal's comment above ?

> Result returned by Append operation should be ordered
> -
>
> Key: HBASE-21021
> URL: https://issues.apache.org/jira/browse/HBASE-21021
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0, 1.5.0
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-21021.branch-1.001.patch
>
>
> *Problem:*
> The result returned by the append operation should be ordered. Currently, it 
> returns an unordered list, which may cause problems: for example, if the user 
> calls Result.getValue(byte[] family, byte[] qualifier), the method may return 
> null even when the returned result contains a value for (family, qualifier), 
> because it performs a binary search over the unsorted result (which should 
> have been sorted).
>  
> The result is built by iterating over each entry of the tempMemstore hashmap 
> (which will never be ordered) and adding the values (see 
> [HRegion.java#L7882|https://github.com/apache/hbase/blob/1b50fe53724aa62a242b7f64adf7845048df/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L7882]).
>  
> *Actual:* The returned result is unordered.
> *Expected:* As with the increment op, the returned result should be ordered.
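A minimal, self-contained illustration (plain Java, not the HBase code itself) of why binary search over an unsorted cell list misses entries that are present, and why sorting before building the Result fixes it:

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Demonstrates the getValue() failure mode: binary search assumes a sorted list.
public class UnsortedResultDemo {
  public static void main(String[] args) {
    // Qualifiers in hashmap iteration order, i.e. effectively unsorted.
    List<String> qualifiers = new ArrayList<>(Arrays.asList("q3", "q1", "q2"));
    // Binary search over the unsorted list "misses" q3 even though it is present.
    System.out.println(Collections.binarySearch(qualifiers, "q3")); // negative (not found)
    // The proposed fix: sort the entries first, then binary search works as expected.
    Collections.sort(qualifiers);
    System.out.println(Collections.binarySearch(qualifiers, "q3")); // valid index (2)
  }
}
{code}

In the actual patch the cells would presumably be sorted with HBase's cell comparator before calling Result.create; the snippet above only shows the shape of the problem.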



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21031) Memory leak if replay edits failed during region opening

2018-08-09 Thread Mike Drob (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574986#comment-16574986
 ] 

Mike Drob commented on HBASE-21031:
---

Very interesting failure scenario, Allan. Great job diagnosing it.

I think I have feedback for the patch, would you mind uploading to review board?

> Memory leak if replay edits failed during region opening
> 
>
> Key: HBASE-21031
> URL: https://issues.apache.org/jira/browse/HBASE-21031
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.1.0, 2.0.1
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-21031.branch-2.0.001.patch, memoryleak.png
>
>
> Due to HBASE-21029, when replaying edits with many identical cells, the 
> memstore won't flush, and an exception is thrown once all heap space is used:
> {code}
> 2018-08-06 15:52:27,590 ERROR 
> [RS_OPEN_REGION-regionserver/hb-bp10cw4ejoy0a2f3f-009:16020-2] 
> handler.OpenRegionHandler(302): Failed open of 
> region=hbase_test,dffa78,1531227033378.cbf9a2daf3aaa0c7e931e9c9a7b53f41., 
> starting to roll back the global memstore size.
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> at 
> org.apache.hadoop.hbase.regionserver.OnheapChunk.allocateDataBuffer(OnheapChunk.java:41)
> at org.apache.hadoop.hbase.regionserver.Chunk.init(Chunk.java:104)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:226)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:180)
> at 
> org.apache.hadoop.hbase.regionserver.ChunkCreator.getChunk(ChunkCreator.java:163)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.getOrMakeChunk(MemStoreLABImpl.java:273)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:148)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreLABImpl.copyCellInto(MemStoreLABImpl.java:111)
> at 
> org.apache.hadoop.hbase.regionserver.Segment.maybeCloneWithAllocator(Segment.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.maybeCloneWithAllocator(AbstractMemStore.java:287)
> at 
> org.apache.hadoop.hbase.regionserver.AbstractMemStore.add(AbstractMemStore.java:107)
> at org.apache.hadoop.hbase.regionserver.HStore.add(HStore.java:706)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.restoreEdit(HRegion.java:5494)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:4608)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:4404)
> {code}
> After this exception, the memstore was not rolled back, and since MSLAB is 
> used, none of the allocated chunks will ever be released. That memory is 
> leaked forever.
> We need to roll back the memory if opening the region fails (for now, only 
> the global memstore size is decreased after a failure).
> Another problem is that we use replayEditsPerRegion in RegionServerAccounting 
> to record how much memory is used during replay, and decrease the global 
> memstore size if the replay fails. This is not right: during replay we may 
> also flush the memstore, so the size in the replayEditsPerRegion map is not 
> accurate at all! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20943) Add offline/online region count into metrics

2018-08-09 Thread jinghan xu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574992#comment-16574992
 ] 

jinghan xu commented on HBASE-20943:


[~huaxiang] thx for taking care of it!

> Add offline/online region count into metrics
> 
>
> Key: HBASE-20943
> URL: https://issues.apache.org/jira/browse/HBASE-20943
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.0.0, 1.2.6.1
>Reporter: Tianying Chang
>Assignee: jinghan xu
>Priority: Minor
> Attachments: HBASE-20943-master-v1.patch, 
> HBASE-20943-master-v2.patch, HBASE-20943.patch, Screen Shot 2018-07-25 at 
> 2.51.19 PM.png
>
>
> We use metrics intensively to monitor the health of our HBase production 
> cluster. We have seen some regions of a table get stuck and fail to be brought 
> online due to an AWS issue that corrupted some log files. It would be good 
> if we could catch this early. Although the WebUI has this information, it is not 
> useful for automated monitoring. By adding this metric, we can easily monitor 
> them with our monitoring system. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20943) Add offline/online region count into metrics

2018-08-09 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574996#comment-16574996
 ] 

Ted Yu commented on HBASE-20943:


Looks good overall.
{code}
+  PairOfSameType getRegionNumbers();
{code}
getRegionNumbers -> getRegionCounts, and adjust the corresponding javadoc.
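A hedged sketch of what the renamed accessor might look like (the interface name and javadoc here are illustrative only; the real patch defines this on the HBase metrics source, and PairOfSameType is org.apache.hadoop.hbase.util.PairOfSameType):

{code:java}
import org.apache.hadoop.hbase.util.PairOfSameType;

// Illustrative sketch only: a metrics accessor exposing online/offline region counts.
public interface RegionCountsSource {
  /**
   * @return a pair of (online region count, offline region count),
   *         suitable for publishing as two gauge metrics.
   */
  PairOfSameType<Integer> getRegionCounts();
}
{code}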

> Add offline/online region count into metrics
> 
>
> Key: HBASE-20943
> URL: https://issues.apache.org/jira/browse/HBASE-20943
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.0.0, 1.2.6.1
>Reporter: Tianying Chang
>Assignee: jinghan xu
>Priority: Minor
> Attachments: HBASE-20943-master-v1.patch, 
> HBASE-20943-master-v2.patch, HBASE-20943.patch, Screen Shot 2018-07-25 at 
> 2.51.19 PM.png
>
>
> We use metrics intensively to monitor the health of our HBase production 
> cluster. We have seen some regions of a table get stuck and fail to be brought 
> online due to an AWS issue that corrupted some log files. It would be good 
> if we could catch this early. Although the WebUI has this information, it is not 
> useful for automated monitoring. By adding this metric, we can easily monitor 
> them with our monitoring system. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21012) Revert the change of serializing TimeRangeTracker

2018-08-09 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575012#comment-16575012
 ] 

Josh Elser commented on HBASE-21012:


{quote}This issue is about the forward compatibility rather than backward 
compatibility. The older hbase can't read hfiles generated by later versions.
{quote}
Sorry! I got it backwards :)

If we don't have a separate reader class (which I assume is the case), you 
could do the opposite of what I suggested. Include a test that reads the newer 
file and make sure that test passes on 1.x branches (running the test on 2.x 
would be essentially pointless, but that's OK I think).

> Revert the change of serializing TimeRangeTracker
> -
>
> Key: HBASE-21012
> URL: https://issues.apache.org/jira/browse/HBASE-21012
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Kuan-Po Tseng
>Priority: Critical
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-21012.master.001.patch, 
> HBASE-21012.master.002.patch, HBASE-21012.master.003.patch, 
> HBASE-21012.master.003.patch, HBASE-21012.master.004.patch, 
> HBASE-21012.master.005.patch
>
>
> HBASE-18754 changed the serialization of TimeRangeTracker from a "manual way" to 
> protobuf. However, the change breaks the backward compatibility of HFiles. We 
> should revert the change ASAP.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21031) Memory leak if replay edits failed during region opening

2018-08-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575027#comment-16575027
 ] 

Hadoop QA commented on HBASE-21031:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.0 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
51s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
10s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} branch-2.0 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} branch-2.0 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
15s{color} | {color:red} hbase-server: The patch generated 14 new + 303 
unchanged - 0 fixed = 317 total (was 303) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 6s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 23s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
13s{color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 
1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 11s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}143m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  Null pointer dereference of region in 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion() on 
exception path  Dereferenced at OpenRegionHandler.java:in 
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion() on 
exception path  Dereferenced at OpenRegionHandler.java:[line 308] |
| Failed junit tests | hadoop.hbase.regionserver.TestHRegion |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:6f01af0 |
| JIRA Issue | HBASE-21031 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12934976/HBASE-21031.branch-2.0.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux b022565dbae1 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision

[jira] [Commented] (HBASE-21012) Revert the change of serializing TimeRangeTracker

2018-08-09 Thread Chia-Ping Tsai (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575033#comment-16575033
 ] 

Chia-Ping Tsai commented on HBASE-21012:


{quote}If we don't have a separate reader class (which I assume is the case), 
you could do the opposite of what I suggested. Include a test that reads the 
newer file and make sure that test passes on 1.x branches (running the test on 
2.x would be essentially pointless, but that's OK I think).
{quote}
Nice suggestion boss! I feel it can be implemented in HBASE-21013.

> Revert the change of serializing TimeRangeTracker
> -
>
> Key: HBASE-21012
> URL: https://issues.apache.org/jira/browse/HBASE-21012
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Kuan-Po Tseng
>Priority: Critical
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-21012.master.001.patch, 
> HBASE-21012.master.002.patch, HBASE-21012.master.003.patch, 
> HBASE-21012.master.003.patch, HBASE-21012.master.004.patch, 
> HBASE-21012.master.005.patch
>
>
> HBASE-18754 changed the serialization of TimeRangeTracker from a "manual way" to 
> protobuf. However, the change breaks the backward compatibility of HFiles. We 
> should revert the change ASAP.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21012) Revert the change of serializing TimeRangeTracker

2018-08-09 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575046#comment-16575046
 ] 

Josh Elser commented on HBASE-21012:


{quote}I feel it can be implemented in HBASE-21013.
{quote}
SGTM!

> Revert the change of serializing TimeRangeTracker
> -
>
> Key: HBASE-21012
> URL: https://issues.apache.org/jira/browse/HBASE-21012
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Kuan-Po Tseng
>Priority: Critical
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-21012.master.001.patch, 
> HBASE-21012.master.002.patch, HBASE-21012.master.003.patch, 
> HBASE-21012.master.003.patch, HBASE-21012.master.004.patch, 
> HBASE-21012.master.005.patch
>
>
> HBASE-18754 changed the serialization of TimeRangeTracker from a "manual way" to 
> protobuf. However, the change breaks the backward compatibility of HFiles. We 
> should revert the change ASAP.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20943) Add offline/online region count into metrics

2018-08-09 Thread huaxiang sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575119#comment-16575119
 ] 

huaxiang sun commented on HBASE-20943:
--

[~jinghanx], np, thanks for the contribution, I did some rebase work. 
[~yuzhih...@gmail.com] v3 addresses your comments, thanks.

> Add offline/online region count into metrics
> 
>
> Key: HBASE-20943
> URL: https://issues.apache.org/jira/browse/HBASE-20943
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.0.0, 1.2.6.1
>Reporter: Tianying Chang
>Assignee: jinghan xu
>Priority: Minor
> Attachments: HBASE-20943-master-v1.patch, 
> HBASE-20943-master-v2.patch, HBASE-20943.patch, Screen Shot 2018-07-25 at 
> 2.51.19 PM.png
>
>
> We use metrics intensively to monitor the health of our HBase production 
> cluster. We have seen some regions of a table get stuck and fail to be brought 
> online due to an AWS issue that corrupted some log files. It would be good 
> if we could catch this early. Although the WebUI has this information, it is not 
> useful for automated monitoring. By adding this metric, we can easily monitor 
> them with our monitoring system. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20943) Add offline/online region count into metrics

2018-08-09 Thread huaxiang sun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-20943:
-
Attachment: HBASE-20943-master-v3.patch

> Add offline/online region count into metrics
> 
>
> Key: HBASE-20943
> URL: https://issues.apache.org/jira/browse/HBASE-20943
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.0.0, 1.2.6.1
>Reporter: Tianying Chang
>Assignee: jinghan xu
>Priority: Minor
> Attachments: HBASE-20943-master-v1.patch, 
> HBASE-20943-master-v2.patch, HBASE-20943-master-v3.patch, HBASE-20943.patch, 
> Screen Shot 2018-07-25 at 2.51.19 PM.png
>
>
> We use metrics intensively to monitor the health of our HBase production 
> cluster. We have seen some regions of a table get stuck and fail to be brought 
> online due to an AWS issue that corrupted some log files. It would be good 
> if we could catch this early. Although the WebUI has this information, it is not 
> useful for automated monitoring. By adding this metric, we can easily monitor 
> them with our monitoring system. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20943) Add offline/online region count into metrics

2018-08-09 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575127#comment-16575127
 ] 

Ted Yu commented on HBASE-20943:


lgtm

> Add offline/online region count into metrics
> 
>
> Key: HBASE-20943
> URL: https://issues.apache.org/jira/browse/HBASE-20943
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.0.0, 1.2.6.1
>Reporter: Tianying Chang
>Assignee: jinghan xu
>Priority: Minor
> Attachments: HBASE-20943-master-v1.patch, 
> HBASE-20943-master-v2.patch, HBASE-20943-master-v3.patch, HBASE-20943.patch, 
> Screen Shot 2018-07-25 at 2.51.19 PM.png
>
>
> We use metrics intensively to monitor the health of our HBase production 
> cluster. We have seen some regions of a table get stuck and fail to be brought 
> online due to an AWS issue that corrupted some log files. It would be good 
> if we could catch this early. Although the WebUI has this information, it is not 
> useful for automated monitoring. By adding this metric, we can easily monitor 
> them with our monitoring system. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20941) Create and implement HbckService in master

2018-08-09 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575128#comment-16575128
 ] 

Sean Busbey commented on HBASE-20941:
-

comments on v1 up on reviewboard.

> Create and implement HbckService in master
> --
>
> Key: HBASE-20941
> URL: https://issues.apache.org/jira/browse/HBASE-20941
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Umesh Agashe
>Assignee: Umesh Agashe
>Priority: Major
> Attachments: hbase-20941.master.001.patch
>
>
> Create HbckService in master and implement the following methods:
>  # setTableState(): If table states are inconsistent with the actions/procedures 
> working on them, sometimes manipulating their states in meta fixes things.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20965) Separate region server report requests to new handlers

2018-08-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575191#comment-16575191
 ] 

Hudson commented on HBASE-20965:


Results for branch branch-2.1
[build #163 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/163/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/163//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/163//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/163//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Separate region server report requests to new handlers
> --
>
> Key: HBASE-20965
> URL: https://issues.apache.org/jira/browse/HBASE-20965
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-20965.branch-2.1.001.patch, 
> HBASE-20965.master.001.patch, HBASE-20965.master.002.patch, 
> HBASE-20965.master.003.patch, HBASE-20965.master.004.patch, 
> HBASE-20965.master.005.patch, HBASE-20965.master.006.patch, 
> HBASE-20965.master.007.patch, HBASE-20965.master.008.patch, 
> HBASE-20965.master.009.patch, HBASE-20965.master.010.patch, 
> HBASE-20965.master.011.patch
>
>
> In the master RPC scheduler, all RPC requests are executed in a single thread pool. 
> This task separates region server report requests into new, dedicated handlers.
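A toy sketch of the dispatch idea (plain Java executors, not the actual RpcScheduler code; pool sizes and names are illustrative): region server reports get their own handler pool so they cannot be starved by other master RPCs.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative only: dedicate a separate pool to regionServerReport-style calls.
public class SeparateReportHandlersDemo {
  private final ExecutorService generalHandlers = Executors.newFixedThreadPool(30);
  private final ExecutorService reportHandlers  = Executors.newFixedThreadPool(10);

  void dispatch(Runnable call, boolean isRegionServerReport) {
    // Reports go to their own handlers; everything else uses the general pool.
    (isRegionServerReport ? reportHandlers : generalHandlers).execute(call);
  }
}
{code}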



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20965) Separate region server report requests to new handlers

2018-08-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575244#comment-16575244
 ] 

Hudson commented on HBASE-20965:


Results for branch branch-2
[build #1085 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1085/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1085//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1085//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1085//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Separate region server report requests to new handlers
> --
>
> Key: HBASE-20965
> URL: https://issues.apache.org/jira/browse/HBASE-20965
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-20965.branch-2.1.001.patch, 
> HBASE-20965.master.001.patch, HBASE-20965.master.002.patch, 
> HBASE-20965.master.003.patch, HBASE-20965.master.004.patch, 
> HBASE-20965.master.005.patch, HBASE-20965.master.006.patch, 
> HBASE-20965.master.007.patch, HBASE-20965.master.008.patch, 
> HBASE-20965.master.009.patch, HBASE-20965.master.010.patch, 
> HBASE-20965.master.011.patch
>
>
> In the master RPC scheduler, all RPC requests are executed in a single thread pool. 
> This task separates region server report requests into new, dedicated handlers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21014) Improve Stochastic Balancer to write HDFS favoured node hints for region primary blocks to avoid destroying data locality if needing to use HDFS Balancer

2018-08-09 Thread Toshihiro Suzuki (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575249#comment-16575249
 ] 

Toshihiro Suzuki commented on HBASE-21014:
--

{code}
So really what I'm asking for is for the Stochastic Balancer to include the 
hint writes like the FavoredNodeBalancer.
{code}
Could you please tell us the details of your idea?

> Improve Stochastic Balancer to write HDFS favoured node hints for region 
> primary blocks to avoid destroying data locality if needing to use HDFS 
> Balancer
> -
>
> Key: HBASE-21014
> URL: https://issues.apache.org/jira/browse/HBASE-21014
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Affects Versions: 1.1.2
>Reporter: Hari Sekhon
>Priority: Major
>
> Improve Stochastic Balancer to include the HDFS region location hints to 
> avoid HDFS Balancer destroying data locality.
> Right now, according to a mix of docs, Jiras, and mailing list info, it appears 
> that one must change
> {code:java}
> hbase.master.loadbalancer.class{code}
> to org.apache.hadoop.hbase.favored.FavoredNodeLoadBalancer, as it looks 
> like this functionality lives only in FavoredNodeBalancer and not in the 
> standard Stochastic Balancer.
> [http://hbase.apache.org/book.html#_hbase_and_hdfs]
> This is not ideal because we'd still like to use all the heuristics and work 
> that have gone into the Stochastic Balancer, which I believe is currently the 
> best and most mature HBase balancer.
> See also the linked Jiras and this discussion:
> [http://apache-hbase.679495.n3.nabble.com/HDFS-Balancer-td4086607.html]
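For reference, a minimal sketch of the configuration switch described above, set programmatically (doing the same in hbase-site.xml is equivalent); the ask in this issue is to make this switch unnecessary by having the Stochastic Balancer write the favored node hints itself:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch only: the balancer switch currently required to get favored node hints.
public class BalancerConfigSketch {
  public static Configuration favoredNodeBalancerConf() {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.master.loadbalancer.class",
        "org.apache.hadoop.hbase.favored.FavoredNodeLoadBalancer");
    return conf;
  }
}
{code}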



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21014) Improve Stochastic Balancer to write HDFS favoured node hints for region primary blocks to avoid destroying data locality if needing to use HDFS Balancer

2018-08-09 Thread Toshihiro Suzuki (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575249#comment-16575249
 ] 

Toshihiro Suzuki edited comment on HBASE-21014 at 8/9/18 6:24 PM:
--

{quote}
So really what I'm asking for is for the Stochastic Balancer to include the 
hint writes like the FavoredNodeBalancer.
{quote}
Could you please tell us the details of your idea?


was (Author: brfrn169):
{code}
So really what I'm asking for is for the Stochastic Balancer to include the 
hint writes like the FavoredNodeBalancer.
{code}
Could you please tell us the details of your idea?

> Improve Stochastic Balancer to write HDFS favoured node hints for region 
> primary blocks to avoid destroying data locality if needing to use HDFS 
> Balancer
> -
>
> Key: HBASE-21014
> URL: https://issues.apache.org/jira/browse/HBASE-21014
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Affects Versions: 1.1.2
>Reporter: Hari Sekhon
>Priority: Major
>
> Improve Stochastic Balancer to include the HDFS region location hints to 
> avoid HDFS Balancer destroying data locality.
> Right now, according to a mix of docs, Jiras, and mailing list info, it appears 
> that one must change
> {code:java}
> hbase.master.loadbalancer.class{code}
> to org.apache.hadoop.hbase.favored.FavoredNodeLoadBalancer, as it looks 
> like this functionality lives only in FavoredNodeBalancer and not in the 
> standard Stochastic Balancer.
> [http://hbase.apache.org/book.html#_hbase_and_hdfs]
> This is not ideal because we'd still like to use all the heuristics and work 
> that have gone into the Stochastic Balancer, which I believe is currently the 
> best and most mature HBase balancer.
> See also the linked Jiras and this discussion:
> [http://apache-hbase.679495.n3.nabble.com/HDFS-Balancer-td4086607.html]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21032) ScanResponse returns a partial result per cell

2018-08-09 Thread Andrey Elenskiy (JIRA)
Andrey Elenskiy created HBASE-21032:
---

 Summary: ScanResponse returns a partial result per cell
 Key: HBASE-21032
 URL: https://issues.apache.org/jira/browse/HBASE-21032
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Affects Versions: 2.1.0
 Environment: HBase 2.1.0

Hadoop 2.8.4

Java 8
Reporter: Andrey Elenskiy
 Attachments: App.java

I have a long row with a bunch of columns that I'm scanning with 
setAllowPartialResults(true). In the response, the first partial is around 2 MB, 
while all of the subsequent ones contain one column per partial. After digging 
more, I found that each of those single-column partials is preceded by a 
heartbeat response (zero cells). This results in two requests per column to a 
regionserver.

I've attached code to reproduce it on hbase version 2.1.0 (it works as expected 
on 2.0.0 and 2.0.1).

[^App.java]

I'm fairly certain it's a serverside issue as 
[gohbase|https://github.com/tsuna/gohbase] client is having the same issue. I 
have not tried to reproduce this with multi-row scan.
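A minimal sketch of the kind of client loop that observes the behaviour described above (the table name and connection setup are placeholders; the attached App.java is the actual reproducer):

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

// Sketch only: scan a wide row with partial results allowed and print the size of
// each partial. On 2.1.0 every partial after the first reportedly holds a single cell.
public class PartialScanSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("wide_table"));  // placeholder name
         ResultScanner scanner =
             table.getScanner(new Scan().setAllowPartialResults(true))) {
      for (Result partial : scanner) {
        System.out.println("cells in this partial: " + partial.rawCells().length);
      }
    }
  }
}
{code}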



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20943) Add offline/online region count into metrics

2018-08-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575304#comment-16575304
 ] 

Hadoop QA commented on HBASE-20943:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
30s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
41s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 18s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hbase-hadoop2-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}113m 
15s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
59s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-20943 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935003/HBASE-20943-master-v3.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 4c6ec017af0a 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revis

[jira] [Commented] (HBASE-21025) Add cache for TableStateManager

2018-08-09 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575350#comment-16575350
 ] 

stack commented on HBASE-21025:
---

Looks good. For the cache, if a table is removed, do we make sure to clear its entry 
in the cache? Should the cache use soft references? If making a new patch, change the 
name from   private final ConcurrentMap tn2State = 
new ConcurrentHashMap<>(); to tableName2State.

Yeah, this is an important change. We need it badly.
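A hedged sketch of the cache shape under discussion (plain String keys and values are used here purely for illustration; the real TableStateManager keys on HBase's TableName and caches TableState), including the removal-on-delete point raised above:

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative only: a table-state cache that must be invalidated when a table is deleted.
class TableStateCacheSketch {
  private final ConcurrentMap<String, String> tableName2State = new ConcurrentHashMap<>();

  String getCached(String tableName) {                 // fast path, avoids a meta read
    return tableName2State.get(tableName);
  }
  void update(String tableName, String state) {        // refresh after a meta write
    tableName2State.put(tableName, state);
  }
  void onTableDeleted(String tableName) {              // the "for sure clear its entry" case
    tableName2State.remove(tableName);
  }
}
{code}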

> Add cache for TableStateManager
> ---
>
> Key: HBASE-21025
> URL: https://issues.apache.org/jira/browse/HBASE-21025
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-21025-v1.patch, HBASE-21025.patch
>
>
> After HBASE-20881, we will check whether a table is disabled in SCP, so we 
> need to add cache for it to improve MTTR, and also reduce the request to meta.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18477) Umbrella JIRA for HBase Read Replica clusters

2018-08-09 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575354#comment-16575354
 ] 

Hudson commented on HBASE-18477:


Results for branch HBASE-18477
[build #290 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/290/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/290//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/290//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/290//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-18477/290//artifact/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> Umbrella JIRA for HBase Read Replica clusters
> -
>
> Key: HBASE-18477
> URL: https://issues.apache.org/jira/browse/HBASE-18477
> Project: HBase
>  Issue Type: New Feature
>Reporter: Zach York
>Assignee: Zach York
>Priority: Major
> Attachments: HBase Read-Replica Clusters Scope doc.docx, HBase 
> Read-Replica Clusters Scope doc.pdf, HBase Read-Replica Clusters Scope 
> doc_v2.docx, HBase Read-Replica Clusters Scope doc_v2.pdf
>
>
> Recently, changes (such as HBASE-17437) have unblocked HBase to run with a 
> root directory external to the cluster (such as in Amazon S3). This means 
> that the data is stored outside of the cluster and can be accessible after 
> the cluster has been terminated. One use case that is often asked about is 
> pointing multiple clusters to one root directory (sharing the data) to have 
> read resiliency in the case of a cluster failure.
>  
> This JIRA is an umbrella JIRA to contain all the tasks necessary to create a 
> read-replica HBase cluster that is pointed at the same root directory.
>  
> This requires making the Read-Replica cluster Read-Only (no metadata 
> operation or data operations).
> Separating the hbase:meta table for each cluster (Otherwise HBase gets 
> confused with multiple clusters trying to update the meta table with their ip 
> addresses)
> Adding refresh functionality for the meta table to ensure new metadata is 
> picked up on the read replica cluster.
> Adding refresh functionality for HFiles for a given table to ensure new data 
> is picked up on the read replica cluster.
>  
> This can be used with any existing cluster that is backed by an external 
> filesystem.
>  
> Please note that this feature is still quite manual (with the potential for 
> automation later).
>  
> More information on this particular feature can be found here: 
> https://aws.amazon.com/blogs/big-data/setting-up-read-replica-clusters-with-hbase-on-amazon-s3/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21033) Separate StoreHeap from StoreFileHeap

2018-08-09 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created HBASE-21033:
-

 Summary: Separate StoreHeap from StoreFileHeap
 Key: HBASE-21033
 URL: https://issues.apache.org/jira/browse/HBASE-21033
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Hofhansl


Currently KeyValueHeap is used both for heaps of StoreScanners at the Region 
level and for heaps of StoreFileScanners (and a MemstoreScanner) at the 
Store level.

This causes various problems:
 # Some incorrect method usage can only be detected at runtime via a runtime 
exception.
 # In profiling sessions it's hard to distinguish the two.
 # It's just not clean :)

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21033) Separate StoreHeap from StoreFileHeap

2018-08-09 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-21033:
--
Attachment: 21033-branch-1.txt

> Separate StoreHeap from StoreFileHeap
> -
>
> Key: HBASE-21033
> URL: https://issues.apache.org/jira/browse/HBASE-21033
> Project: HBase
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Priority: Minor
> Attachments: 21033-branch-1.txt
>
>
> Currently KeyValueHeap is used both for heaps of StoreScanners at the Region 
> level and for heaps of StoreFileScanners (and a MemstoreScanner) at the 
> Store level.
> This causes various problems:
>  # Some incorrect method usage can only be detected at runtime via a runtime 
> exception.
>  # In profiling sessions it's hard to distinguish the two.
>  # It's just not clean :)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21033) Separate StoreHeap from StoreFileHeap

2018-08-09 Thread Lars Hofhansl (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575420#comment-16575420
 ] 

Lars Hofhansl commented on HBASE-21033:
---

Here's a sample patch for DISCUSSION.

It looks big, but it basically just renames 3 files and adds a simple subclass 
that implements InternalScanner.

Note that this punts on dealing with ReversedKeyValueHeap. Technically we 
should have two of those as well, but that would mean duplication of code. Open 
to more changes.

This has annoyed me for quite a while... If I'm the only one, I'm happy to 
close this.
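For orientation, a toy sketch of the type-level split being proposed (names and shapes here are illustrative, not the patch itself): give the two uses of the generic heap distinct classes so misuse fails at compile time and profiles show which heap is which.

{code:java}
// Illustrative only: split the single generic heap type into two distinct classes.
class KeyValueHeapBaseSketch {
  // shared heap machinery (poll, peek, seek) would live here
}

// Heap of StoreFileScanners plus the memstore scanner, used inside a single Store.
class StoreFileHeapSketch extends KeyValueHeapBaseSketch { }

// Heap of StoreScanners at the Region level; per the comment above, in the real
// patch this is the subclass that would implement InternalScanner.
class StoreHeapSketch extends KeyValueHeapBaseSketch { }
{code}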

> Separate StoreHeap from StoreFileHeap
> -
>
> Key: HBASE-21033
> URL: https://issues.apache.org/jira/browse/HBASE-21033
> Project: HBase
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Priority: Minor
> Attachments: 21033-branch-1.txt
>
>
> Currently KeyValueHeap is used both for heaps of StoreScanners at the Region 
> level and for heaps of StoreFileScanners (and a MemstoreScanner) at the 
> Store level.
> This causes various problems:
>  # Some incorrect method usage can only be detected at runtime via a runtime 
> exception.
>  # In profiling sessions it's hard to distinguish the two.
>  # It's just not clean :)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20952) Re-visit the WAL API

2018-08-09 Thread Zach York (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575422#comment-16575422
 ] 

Zach York commented on HBASE-20952:
---

{quote}What do we need in terms of durability guarantees? Is sync-after-write 
necessary?
{quote}
[~mdrob] do you mean having to do a manual 'sync' from HBase's perspective (like I 
think we do now), or from the WAL's perspective? I think a WAL needs to be durable 
and consistent (from a read-after-create standpoint). That's more of an 
implementation detail of the specific WAL backend, I would think (not something 
that would find its way into the high-level interface). Or maybe I'm 
misunderstanding which piece you're talking about.
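To make the sync-after-write question concrete, here is a hedged sketch of the kind of primitive surface being discussed (an illustrative interface only, not the proposed HBase API): append is buffered and returns a sequence id, and sync is the explicit durability point.

{code:java}
import java.io.IOException;

// Illustrative only: the minimal append/sync split the durability discussion is about.
interface MinimalWalSketch extends AutoCloseable {
  /** Buffer an edit; it is not guaranteed durable until sync() covers its sequence id. */
  long append(byte[] edit) throws IOException;

  /** Block until all edits up to and including sequenceId are durable on the backend. */
  void sync(long sequenceId) throws IOException;

  @Override
  void close() throws IOException;
}
{code}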

> Re-visit the WAL API
> 
>
> Key: HBASE-20952
> URL: https://issues.apache.org/jira/browse/HBASE-20952
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Priority: Major
>
> Take a step back from the current WAL implementations and think about what an 
> HBase WAL API should look like. What are the primitive calls that we require 
> to guarantee durability of writes with a high degree of performance?
> The API needs to take the current implementations into consideration. We 
> should also keep in mind what is happening in the Ratis LogService (but 
> the LogService should not dictate what HBase's WAL API looks like; see RATIS-272).
> Other "systems" inside of HBase that use WALs are replication and 
> backup&restore. Replication has the use-case of "tail"ing the WAL, which we 
> should provide via our new API. B&R doesn't do anything fancy (IIRC). We 
> should make sure all consumers are generally going to be OK with the API we 
> create.
> The API may be "OK" (or OK in part). We also need to consider other methods 
> which were "bolted" on, such as {{AbstractFSWAL}} and 
> {{WALFileLengthProvider}}. Other corners of "WAL use" (like the 
> {{WALSplitter}}) should also be looked at to use WAL APIs only.
> We also need to make sure that adequate interface audience and stability 
> annotations are chosen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21033) Separate the StoreHeap from StoreFileHeap

2018-08-09 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-21033:
--
Summary: Separate the StoreHeap from StoreFileHeap  (was: Separate 
StoreHeap from StoreFileHeap)

> Separate the StoreHeap from StoreFileHeap
> -
>
> Key: HBASE-21033
> URL: https://issues.apache.org/jira/browse/HBASE-21033
> Project: HBase
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Priority: Minor
> Attachments: 21033-branch-1.txt
>
>
> Currently KeyValueHeap is used both for heaps of StoreScanners at the Region 
> level and for heaps of StoreFileScanners (and a MemstoreScanner) at the 
> Store level.
> This causes various problems:
>  # Some incorrect method usage can only be detected at runtime via a runtime 
> exception.
>  # In profiling sessions it's hard to distinguish the two.
>  # It's just not clean :)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20734) Colocate recovered edits directory with hbase.wal.dir

2018-08-09 Thread Zach York (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575480#comment-16575480
 ] 

Zach York commented on HBASE-20734:
---

Finally have some time to work on this again. I guess a single exists() call 
per region isn't too expensive. I'll implement that.

Thanks for the input [~apurtell]. This should target pre-HBase 3 since right 
now we're in a hybrid state.
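A hedged sketch of the single-exists()-per-region idea mentioned above (the path layout and config keys are shown as plain strings for illustration; the actual patch would use HBase's FSUtils helpers and exact directory layout):

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: prefer recovered.edits under the WAL dir, fall back to the legacy
// location under the root dir, at the cost of one exists() call per region.
public class RecoveredEditsLocationSketch {
  static Path chooseRecoveredEditsDir(Configuration conf, String regionEncodedName)
      throws IOException {
    Path walDirBased  = new Path(conf.get("hbase.wal.dir", "/hbasewal"),
        "recovered.edits/" + regionEncodedName);              // illustrative layout
    Path rootDirBased = new Path(conf.get("hbase.rootdir", "/hbase"),
        "recovered.edits/" + regionEncodedName);              // legacy location
    FileSystem walFs = walDirBased.getFileSystem(conf);
    return walFs.exists(walDirBased) ? walDirBased : rootDirBased;  // single exists() check
  }
}
{code}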

> Colocate recovered edits directory with hbase.wal.dir
> -
>
> Key: HBASE-20734
> URL: https://issues.apache.org/jira/browse/HBASE-20734
> Project: HBase
>  Issue Type: Improvement
>  Components: MTTR, Recovery, wal
>Reporter: Ted Yu
>Assignee: Zach York
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20734.branch-1.001.patch
>
>
> During investigation of HBASE-20723, I realized that we wouldn't get the best 
> performance for recovered edits when hbase.wal.dir is configured to be on 
> different (fast) media than the hbase rootdir, since the recovered edits 
> directory currently lives under the rootdir.
> Such a setup may not result in fast recovery when there is a region server 
> failover.
> This issue is to find a proper (hopefully backward compatible) way of 
> colocating the recovered edits directory with hbase.wal.dir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21033) Separate the StoreHeap from StoreFileHeap

2018-08-09 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575502#comment-16575502
 ] 

Sakthi commented on HBASE-21033:


[~lhofhansl] do you want to assign it to yourself?

> Separate the StoreHeap from StoreFileHeap
> -
>
> Key: HBASE-21033
> URL: https://issues.apache.org/jira/browse/HBASE-21033
> Project: HBase
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Priority: Minor
> Attachments: 21033-branch-1.txt
>
>
> Currently KeyValueHeap is used both for heaps of StoreScanners at the Region 
> level and for heaps of StoreFileScanners (and a MemstoreScanner) at the 
> Store level.
> This causes various problems:
>  # Some incorrect method usage can only be detected at runtime via a runtime 
> exception.
>  # In profiling sessions it's hard to distinguish the two.
>  # It's just not clean :)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21032) ScanResponses contain only one cell each

2018-08-09 Thread Andrey Elenskiy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Elenskiy updated HBASE-21032:

Summary: ScanResponses contain only one cell each  (was: ScanResponse 
returns a partial result per cell)

> ScanResponses contain only one cell each
> 
>
> Key: HBASE-21032
> URL: https://issues.apache.org/jira/browse/HBASE-21032
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 2.1.0
> Environment: HBase 2.1.0
> Hadoop 2.8.4
> Java 8
>Reporter: Andrey Elenskiy
>Priority: Major
> Attachments: App.java
>
>
> I have a long row with a bunch of columns that I'm scanning with 
> setAllowPartialResults(true). In the response, the first partial is around 2 MB, 
> while all of the subsequent ones contain one column per partial. After digging 
> more, I found that each of those single-column partials is preceded by a 
> heartbeat response (zero cells). This results in two requests per column 
> to a regionserver.
> I've attached code to reproduce it on hbase version 2.1.0 (it works as 
> expected on 2.0.0 and 2.0.1).
> [^App.java]
> I'm fairly certain it's a serverside issue as 
> [gohbase|https://github.com/tsuna/gohbase] client is having the same issue. I 
> have not tried to reproduce this with multi-row scan.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21032) ScanResponses contain only one cell each

2018-08-09 Thread Andrey Elenskiy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Elenskiy updated HBASE-21032:

Description: 
I have a long row with a bunch of columns that I'm scanning with 
setAllowPartialResults(true). In the response I'm getting the first partial 
ScanResponse being around 2MB with multiple cells while all of the consequent 
ones being 1 cell per ScanResponse. After digging more, I found that each of 
those single cell ScanResponse partials are preceded by a heartbeat (zero 
cells). This results in two requests per cell to a regionserver.

I've attached code to reproduce it on hbase version 2.1.0 (it works as expected 
on 2.0.0 and 2.0.1).

[^App.java]

I'm fairly certain it's a serverside issue as 
[gohbase|https://github.com/tsuna/gohbase] client is having the same issue. I 
have not tried to reproduce this with multi-row scan.

  was:
I have a long row with a bunch of columns that I'm scanning with 
setAllowPartialResults(true). In the response I'm getting the first partial 
being around 2MB while all of the consequent ones being 1 column per partial. 
After digging more, I found that each of those single column partials are 
preceded by a heartbeat response (zero cells). This results in two request per 
column to a regionserver.

I've attached code to reproduce it on hbase version 2.1.0 (it works as expected 
on 2.0.0 and 2.0.1).

[^App.java]

I'm fairly certain it's a serverside issue as 
[gohbase|https://github.com/tsuna/gohbase] client is having the same issue. I 
have not tried to reproduce this with multi-row scan.


> ScanResponses contain only one cell each
> 
>
> Key: HBASE-21032
> URL: https://issues.apache.org/jira/browse/HBASE-21032
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 2.1.0
> Environment: HBase 2.1.0
> Hadoop 2.8.4
> Java 8
>Reporter: Andrey Elenskiy
>Priority: Major
> Attachments: App.java
>
>
> I have a long row with a bunch of columns that I'm scanning with 
> setAllowPartialResults(true). In the response I'm getting the first partial 
> ScanResponse being around 2MB with multiple cells while all of the consequent 
> ones being 1 cell per ScanResponse. After digging more, I found that each of 
> those single cell ScanResponse partials are preceded by a heartbeat (zero 
> cells). This results in two requests per cell to a regionserver.
> I've attached code to reproduce it on hbase version 2.1.0 (it works as 
> expected on 2.0.0 and 2.0.1).
> [^App.java]
> I'm fairly certain it's a serverside issue as 
> [gohbase|https://github.com/tsuna/gohbase] client is having the same issue. I 
> have not tried to reproduce this with multi-row scan.
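
As a rough client-side sketch of the access pattern described above (hypothetical table name; this is not the attached App.java):

{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

// Sketch only: scan with partial results allowed and report how many cells
// each partial carries.
public class PartialScanSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("wide_table"))) {
      Scan scan = new Scan().setAllowPartialResults(true);
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result partial : scanner) {
          // On 2.0.x each partial carries many cells; on 2.1.0 the reporter sees
          // one cell per ScanResponse after the first ~2MB partial.
          System.out.println("cells in partial: " + partial.rawCells().length);
        }
      }
    }
  }
}
{code}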



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21011) Provide CLI option to run oldwals and hfiles cleaner separately when cleaner chore is disabled

2018-08-09 Thread Tak Lon (Stephen) Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575634#comment-16575634
 ] 

Tak Lon (Stephen) Wu commented on HBASE-21011:
--

To [~yuzhih...@gmail.com] [~anoop.hbase], do you guys have any comments on it? 
Sorry for pulling you guys in, but I found you were the reviewers for 
HBASE-17280.

> Provide CLI option to run oldwals and hfiles cleaner separately when cleaner 
> chore is disabled
> --
>
> Key: HBASE-21011
> URL: https://issues.apache.org/jira/browse/HBASE-21011
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, Client
>Affects Versions: 3.0.0, 1.4.6, 2.1.1
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
> Attachments: HBASE-21011.master.001.patch, 
> HBASE-21011.master.002.patch, HBASE-21011.master.003.patch, 
> HBASE-21011.master.004.patch
>
>
> There is a corner case when the cleaner chore for HFiles and oldwals is 
> disabled: the admin/user needs to manually execute the admin command 
> {{cleaner_chore_run}} to clean the old HFiles and oldwals. The existing logic 
> of {{cleaner_chore_run}} is to [first trigger the HFiles cleaner and then the 
> oldwals 
> cleaner|https://github.com/taklwu/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java#L1414-L1420],
>  and it only reports success if both complete. 
> But when running this {{cleaner_chore_run}} command, there is a potential use 
> case where the admin would like to trigger the cleaner for only oldwals or 
> hfiles but still keep the automatic cleaner chore disabled. So, this change 
> aims to support this corner case and give users who keep the cleaner chore 
> disabled by default the flexibility to run the oldwals and HFiles cleaning 
> procedures individually from the admin CLI.
> NOTE that {{cleaner_chore_run}} was introduced in HBASE-17280; this patch adds 
> the options 'hfiles' and 'oldwals' to it, and also changes the default 
> behavior so that {{cleaner_chore_run}} only runs when the cleaner chore is 
> disabled, e.g. the proposed admin CLI options are
> {noformat}
> hbase> cleaner_chore_run   # introduced in HBASE-17280, but now only 
> runs when the cleaner chore is disabled
> hbase> cleaner_chore_run 'hfiles'  # added, runs when the cleaner chore is 
> disabled
> hbase> cleaner_chore_run 'oldwals' # added, runs when the cleaner chore is 
> disabled
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21034) Add new throttle type: read/write capacity unit

2018-08-09 Thread Yi Mei (JIRA)
Yi Mei created HBASE-21034:
--

 Summary: Add new throttle type: read/write capacity unit
 Key: HBASE-21034
 URL: https://issues.apache.org/jira/browse/HBASE-21034
 Project: HBase
  Issue Type: Task
Affects Versions: 3.0.0, 2.2.0
Reporter: Yi Mei


Add new throttle type: read/write capacity unit, like DynamoDB.

One read capacity unit represents one read of up to 1 KB of data per time unit. 
If the data size is more than 1 KB, additional read capacity units are consumed.

One write capacity unit represents one write of an item up to 1 KB in size per 
time unit. If the data size is more than 1 KB, additional write capacity units 
are consumed.

For example, 100 read capacity units per second means that an HBase user can 
read 1 KB of data 100 times per second, or 2 KB of data 50 times per second, 
and so on.
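
In other words, the unit count is the data size rounded up to whole kilobytes. A minimal sketch of that arithmetic (hypothetical names, not the eventual quota API):

{code:java}
// Minimal sketch of the capacity-unit arithmetic described above; names are
// hypothetical and do not reflect the eventual HBase quota implementation.
public final class CapacityUnits {
  private static final long UNIT_SIZE_BYTES = 1024L; // 1 KB per capacity unit

  /** A 0.5 KB request costs 1 unit, a 2 KB request costs 2 units, and so on. */
  public static long unitsFor(long dataSizeBytes) {
    return Math.max(1L, (dataSizeBytes + UNIT_SIZE_BYTES - 1) / UNIT_SIZE_BYTES);
  }

  public static void main(String[] args) {
    // With a quota of 100 read units/second: 100 x 1 KB reads or 50 x 2 KB reads.
    System.out.println(unitsFor(1024)); // 1
    System.out.println(unitsFor(2048)); // 2
  }
}
{code}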



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-21034) Add new throttle type: read/write capacity unit

2018-08-09 Thread Yi Mei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Mei reassigned HBASE-21034:
--

Assignee: Yi Mei

> Add new throttle type: read/write capacity unit
> ---
>
> Key: HBASE-21034
> URL: https://issues.apache.org/jira/browse/HBASE-21034
> Project: HBase
>  Issue Type: Task
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
>
> Add new throttle type: read/write capacity unit, like DynamoDB.
> One read capacity unit represents one read of up to 1 KB of data per time 
> unit. If the data size is more than 1 KB, additional read capacity units are 
> consumed.
> One write capacity unit represents one write of an item up to 1 KB in size 
> per time unit. If the data size is more than 1 KB, additional write capacity 
> units are consumed.
> For example, 100 read capacity units per second means that an HBase user can 
> read 1 KB of data 100 times per second, or 2 KB of data 50 times per second, 
> and so on.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-16707) [Umbrella] Improve throttling feature for production usage

2018-08-09 Thread Yi Mei (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-16707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575677#comment-16575677
 ] 

Yi Mei commented on HBASE-16707:


Hi [~zghaobac], I will take this task and separate it into several sub-tasks. 
Thanks.

> [Umbrella] Improve throttling feature for production usage
> --
>
> Key: HBASE-16707
> URL: https://issues.apache.org/jira/browse/HBASE-16707
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Guanghao Zhang
>Priority: Major
>
> HBASE-11598 added the rpc throttling feature and did great initial work there. 
> We plan to use throttling in our production cluster and made some improvements 
> to it. From the user mailing list, I found that other users use the throttling 
> feature, too. I think it is time to contribute our work to the community, 
> including:
> 1. Add a shell cmd to start/stop throttling.
> 2. Add metrics for throttled requests.
> 3. Basic UI support in master/regionserver.
> 4. Handle throttling exceptions in the client.
> 5. Add more throttle types like DynamoDB, using read/write capacity units to 
> throttle.
> 6. Support a soft limit: a user can over-consume his quota when the 
> regionserver has available capacity because other users are not consuming at 
> the same time.
> 7. ... ...
> I think some of these improvements are useful, so I am opening an umbrella 
> issue to track them. Suggestions and discussions are welcome.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21028) Backport HBASE-18633 to branch-1.3

2018-08-09 Thread Daniel Wong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Wong updated HBASE-21028:

Attachment: HBASE-21028.patch
Status: Patch Available  (was: Open)

> Backport HBASE-18633 to branch-1.3
> --
>
> Key: HBASE-21028
> URL: https://issues.apache.org/jira/browse/HBASE-21028
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 1.3.2
>Reporter: Daniel Wong
>Priority: Minor
> Fix For: 1.3.3
>
> Attachments: HBASE-21028.patch
>
>
> The logging improvements in HBASE-18633 would give greater visibility on 
> systems in 1.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21028) Backport HBASE-18633 to branch-1.3

2018-08-09 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575680#comment-16575680
 ] 

Hadoop QA commented on HBASE-21028:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HBASE-21028 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.7.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-21028 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12935065/HBASE-21028.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13995/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Backport HBASE-18633 to branch-1.3
> --
>
> Key: HBASE-21028
> URL: https://issues.apache.org/jira/browse/HBASE-21028
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 1.3.2
>Reporter: Daniel Wong
>Priority: Minor
> Fix For: 1.3.3
>
> Attachments: HBASE-21028.patch
>
>
> The logging improvements in HBASE-18633 would give greater visibility on 
> systems in 1.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21012) Revert the change of serializing TimeRangeTracker

2018-08-09 Thread Chia-Ping Tsai (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575702#comment-16575702
 ] 

Chia-Ping Tsai commented on HBASE-21012:


005.patch LGTM.

Since compatibility is a critical issue for hbase, I will commit the patch to 
all 2.x+ branches tomorrow. FYI [~stack] [~Apache9]

> Revert the change of serializing TimeRangeTracker
> -
>
> Key: HBASE-21012
> URL: https://issues.apache.org/jira/browse/HBASE-21012
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Kuan-Po Tseng
>Priority: Critical
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-21012.master.001.patch, 
> HBASE-21012.master.002.patch, HBASE-21012.master.003.patch, 
> HBASE-21012.master.003.patch, HBASE-21012.master.004.patch, 
> HBASE-21012.master.005.patch
>
>
> HBASE-18754 changed the serialization of TimeRangeTracker from the "manual 
> way" to protobuf. However, the change breaks the backward compatibility of 
> hfile. We should revert the change ASAP.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-18201) add UT and docs for DataBlockEncodingTool

2018-08-09 Thread Reid Chan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-18201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-18201:
--
Fix Version/s: 2.1.1

> add UT and docs for DataBlockEncodingTool
> -
>
> Key: HBASE-18201
> URL: https://issues.apache.org/jira/browse/HBASE-18201
> Project: HBase
>  Issue Type: Sub-task
>  Components: tooling
>Reporter: Chia-Ping Tsai
>Assignee: Kuan-Po Tseng
>Priority: Minor
>  Labels: beginner
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-18201.master.001.patch, 
> HBASE-18201.master.002.patch, HBASE-18201.master.002.patch, 
> HBASE-18201.master.003.patch, HBASE-18201.master.004.patch, 
> HBASE-18201.master.005.patch, HBASE-18201.master.005.patch, 
> HBASE-18201.master.005.patch, HBASE-18201.master.006.patch, 
> HBASE-18201.master.006.patch
>
>
> There are no examples, documents, or tests for DataBlockEncodingTool. We 
> should make it friendly if any use case exists. Otherwise, we should just get 
> rid of it, because DataBlockEncodingTool presumes that the implementation of 
> the cell returned from DataBlockEncoder is KeyValue. That presumption may 
> obstruct the cleanup of KeyValue references in the code base of the read/write 
> path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18201) add UT and docs for DataBlockEncodingTool

2018-08-09 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575722#comment-16575722
 ] 

Reid Chan commented on HBASE-18201:
---

Pushed to master, branch-2 and branch-2.1.

> add UT and docs for DataBlockEncodingTool
> -
>
> Key: HBASE-18201
> URL: https://issues.apache.org/jira/browse/HBASE-18201
> Project: HBase
>  Issue Type: Sub-task
>  Components: tooling
>Reporter: Chia-Ping Tsai
>Assignee: Kuan-Po Tseng
>Priority: Minor
>  Labels: beginner
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-18201.master.001.patch, 
> HBASE-18201.master.002.patch, HBASE-18201.master.002.patch, 
> HBASE-18201.master.003.patch, HBASE-18201.master.004.patch, 
> HBASE-18201.master.005.patch, HBASE-18201.master.005.patch, 
> HBASE-18201.master.005.patch, HBASE-18201.master.006.patch, 
> HBASE-18201.master.006.patch
>
>
> There are no examples, documents, or tests for DataBlockEncodingTool. We 
> should make it friendly if any use case exists. Otherwise, we should just get 
> rid of it, because DataBlockEncodingTool presumes that the implementation of 
> the cell returned from DataBlockEncoder is KeyValue. That presumption may 
> obstruct the cleanup of KeyValue references in the code base of the read/write 
> path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-18201) add UT and docs for DataBlockEncodingTool

2018-08-09 Thread Reid Chan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-18201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-18201:
--
  Resolution: Resolved
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> add UT and docs for DataBlockEncodingTool
> -
>
> Key: HBASE-18201
> URL: https://issues.apache.org/jira/browse/HBASE-18201
> Project: HBase
>  Issue Type: Sub-task
>  Components: tooling
>Reporter: Chia-Ping Tsai
>Assignee: Kuan-Po Tseng
>Priority: Minor
>  Labels: beginner
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-18201.master.001.patch, 
> HBASE-18201.master.002.patch, HBASE-18201.master.002.patch, 
> HBASE-18201.master.003.patch, HBASE-18201.master.004.patch, 
> HBASE-18201.master.005.patch, HBASE-18201.master.005.patch, 
> HBASE-18201.master.005.patch, HBASE-18201.master.006.patch, 
> HBASE-18201.master.006.patch
>
>
> There are no examples, documents, or tests for DataBlockEncodingTool. We 
> should make it friendly if any use case exists. Otherwise, we should just get 
> rid of it, because DataBlockEncodingTool presumes that the implementation of 
> the cell returned from DataBlockEncoder is KeyValue. That presumption may 
> obstruct the cleanup of KeyValue references in the code base of the read/write 
> path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18201) add UT and docs for DataBlockEncodingTool

2018-08-09 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575723#comment-16575723
 ] 

Reid Chan commented on HBASE-18201:
---

Thanks for the contribution, [~brandboat]

> add UT and docs for DataBlockEncodingTool
> -
>
> Key: HBASE-18201
> URL: https://issues.apache.org/jira/browse/HBASE-18201
> Project: HBase
>  Issue Type: Sub-task
>  Components: tooling
>Reporter: Chia-Ping Tsai
>Assignee: Kuan-Po Tseng
>Priority: Minor
>  Labels: beginner
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-18201.master.001.patch, 
> HBASE-18201.master.002.patch, HBASE-18201.master.002.patch, 
> HBASE-18201.master.003.patch, HBASE-18201.master.004.patch, 
> HBASE-18201.master.005.patch, HBASE-18201.master.005.patch, 
> HBASE-18201.master.005.patch, HBASE-18201.master.006.patch, 
> HBASE-18201.master.006.patch
>
>
> There are no examples, documents, or tests for DataBlockEncodingTool. We 
> should make it friendly if any use case exists. Otherwise, we should just get 
> rid of it, because DataBlockEncodingTool presumes that the implementation of 
> the cell returned from DataBlockEncoder is KeyValue. That presumption may 
> obstruct the cleanup of KeyValue references in the code base of the read/write 
> path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18201) add UT and docs for DataBlockEncodingTool

2018-08-09 Thread Chia-Ping Tsai (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575724#comment-16575724
 ] 

Chia-Ping Tsai commented on HBASE-18201:


+1 Nice patch [~brandboat]!

> add UT and docs for DataBlockEncodingTool
> -
>
> Key: HBASE-18201
> URL: https://issues.apache.org/jira/browse/HBASE-18201
> Project: HBase
>  Issue Type: Sub-task
>  Components: tooling
>Reporter: Chia-Ping Tsai
>Assignee: Kuan-Po Tseng
>Priority: Minor
>  Labels: beginner
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-18201.master.001.patch, 
> HBASE-18201.master.002.patch, HBASE-18201.master.002.patch, 
> HBASE-18201.master.003.patch, HBASE-18201.master.004.patch, 
> HBASE-18201.master.005.patch, HBASE-18201.master.005.patch, 
> HBASE-18201.master.005.patch, HBASE-18201.master.006.patch, 
> HBASE-18201.master.006.patch
>
>
> There are no examples, documents, or tests for DataBlockEncodingTool. We 
> should make it friendly if any use case exists. Otherwise, we should just get 
> rid of it, because DataBlockEncodingTool presumes that the implementation of 
> the cell returned from DataBlockEncoder is KeyValue. That presumption may 
> obstruct the cleanup of KeyValue references in the code base of the read/write 
> path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18201) add UT and docs for DataBlockEncodingTool

2018-08-09 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575725#comment-16575725
 ] 

Reid Chan commented on HBASE-18201:
---

I just tried to cherry-pick this to branch-1; it may need some more work.
You can file an issue to backport this improvement. FYI [~brandboat]

> add UT and docs for DataBlockEncodingTool
> -
>
> Key: HBASE-18201
> URL: https://issues.apache.org/jira/browse/HBASE-18201
> Project: HBase
>  Issue Type: Sub-task
>  Components: tooling
>Reporter: Chia-Ping Tsai
>Assignee: Kuan-Po Tseng
>Priority: Minor
>  Labels: beginner
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-18201.master.001.patch, 
> HBASE-18201.master.002.patch, HBASE-18201.master.002.patch, 
> HBASE-18201.master.003.patch, HBASE-18201.master.004.patch, 
> HBASE-18201.master.005.patch, HBASE-18201.master.005.patch, 
> HBASE-18201.master.005.patch, HBASE-18201.master.006.patch, 
> HBASE-18201.master.006.patch
>
>
> There are no examples, documents, or tests for DataBlockEncodingTool. We 
> should make it friendly if any use case exists. Otherwise, we should just get 
> rid of it, because DataBlockEncodingTool presumes that the implementation of 
> the cell returned from DataBlockEncoder is KeyValue. That presumption may 
> obstruct the cleanup of KeyValue references in the code base of the read/write 
> path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21034) Add new throttle type: read/write capacity unit

2018-08-09 Thread Guanghao Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-21034:
---
Issue Type: Sub-task  (was: Task)
Parent: HBASE-16707

> Add new throttle type: read/write capacity unit
> ---
>
> Key: HBASE-21034
> URL: https://issues.apache.org/jira/browse/HBASE-21034
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Yi Mei
>Assignee: Yi Mei
>Priority: Major
>
> Add new throttle type: read/write capacity unit like DynamoDB.
> One read capacity unit represents that read up to 1K data per time unit. If 
> data size is more than 1K, the consume additional read capacity units.
> One write capacity unit represents that one write for an item up to 1 KB in 
> size per time unit. If data size is more than 1K, the consume additional 
> write capacity units.
> For example, 100 read capacity units per second means that, HBase user can 
> read 100 times for 1K data in every second, or 50 times for 2K data in every 
> second and so on.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18201) add UT and docs for DataBlockEncodingTool

2018-08-09 Thread Kuan-Po Tseng (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-18201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575728#comment-16575728
 ] 

Kuan-Po Tseng commented on HBASE-18201:
---

[~reidchan] OK, I will find a day to backport this to branch-1, thanks everyone.

> add UT and docs for DataBlockEncodingTool
> -
>
> Key: HBASE-18201
> URL: https://issues.apache.org/jira/browse/HBASE-18201
> Project: HBase
>  Issue Type: Sub-task
>  Components: tooling
>Reporter: Chia-Ping Tsai
>Assignee: Kuan-Po Tseng
>Priority: Minor
>  Labels: beginner
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-18201.master.001.patch, 
> HBASE-18201.master.002.patch, HBASE-18201.master.002.patch, 
> HBASE-18201.master.003.patch, HBASE-18201.master.004.patch, 
> HBASE-18201.master.005.patch, HBASE-18201.master.005.patch, 
> HBASE-18201.master.005.patch, HBASE-18201.master.006.patch, 
> HBASE-18201.master.006.patch
>
>
> There are no examples, documents, or tests for DataBlockEncodingTool. We 
> should make it friendly if any use case exists. Otherwise, we should just get 
> rid of it, because DataBlockEncodingTool presumes that the implementation of 
> the cell returned from DataBlockEncoder is KeyValue. That presumption may 
> obstruct the cleanup of KeyValue references in the code base of the read/write 
> path.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21032) ScanResponses contain only one cell each

2018-08-09 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575727#comment-16575727
 ] 

Reid Chan commented on HBASE-21032:
---

Thanks for this report, Andrey [~timoha]. So, do you expect the partial results 
to be larger?

> ScanResponses contain only one cell each
> 
>
> Key: HBASE-21032
> URL: https://issues.apache.org/jira/browse/HBASE-21032
> Project: HBase
>  Issue Type: Bug
>  Components: Scanners
>Affects Versions: 2.1.0
> Environment: HBase 2.1.0
> Hadoop 2.8.4
> Java 8
>Reporter: Andrey Elenskiy
>Priority: Major
> Attachments: App.java
>
>
> I have a long row with a bunch of columns that I'm scanning with 
> setAllowPartialResults(true). In the response I'm getting the first partial 
> ScanResponse being around 2MB with multiple cells while all of the consequent 
> ones being 1 cell per ScanResponse. After digging more, I found that each of 
> those single cell ScanResponse partials are preceded by a heartbeat (zero 
> cells). This results in two requests per cell to a regionserver.
> I've attached code to reproduce it on hbase version 2.1.0 (it works as 
> expected on 2.0.0 and 2.0.1).
> [^App.java]
> I'm fairly certain it's a serverside issue as 
> [gohbase|https://github.com/tsuna/gohbase] client is having the same issue. I 
> have not tried to reproduce this with multi-row scan.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21011) Provide CLI option to run oldwals and hfiles cleaner separately when cleaner chore is disabled

2018-08-09 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575733#comment-16575733
 ] 

Ted Yu commented on HBASE-21011:


I may not have time to fully digest the proposal here.

The scenario seems to target a corner case (as mentioned in the description).
I hope Reid can take another look at the latest patch (since he seems to have a 
strong opinion).

Thanks

> Provide CLI option to run oldwals and hfiles cleaner separately when cleaner 
> chore is disabled
> --
>
> Key: HBASE-21011
> URL: https://issues.apache.org/jira/browse/HBASE-21011
> Project: HBase
>  Issue Type: Improvement
>  Components: Admin, Client
>Affects Versions: 3.0.0, 1.4.6, 2.1.1
>Reporter: Tak Lon (Stephen) Wu
>Assignee: Tak Lon (Stephen) Wu
>Priority: Minor
> Attachments: HBASE-21011.master.001.patch, 
> HBASE-21011.master.002.patch, HBASE-21011.master.003.patch, 
> HBASE-21011.master.004.patch
>
>
> There is a corner case when the cleaner chore for HFiles and oldwals is 
> disabled: the admin/user needs to manually execute the admin command 
> {{cleaner_chore_run}} to clean the old HFiles and oldwals. The existing logic 
> of {{cleaner_chore_run}} is to [first trigger the HFiles cleaner and then the 
> oldwals 
> cleaner|https://github.com/taklwu/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java#L1414-L1420],
>  and it only reports success if both complete. 
> But when running this {{cleaner_chore_run}} command, there is a potential use 
> case where the admin would like to trigger the cleaner for only oldwals or 
> hfiles but still keep the automatic cleaner chore disabled. So, this change 
> aims to support this corner case and give users who keep the cleaner chore 
> disabled by default the flexibility to run the oldwals and HFiles cleaning 
> procedures individually from the admin CLI.
> NOTE that {{cleaner_chore_run}} was introduced in HBASE-17280; this patch adds 
> the options 'hfiles' and 'oldwals' to it, and also changes the default 
> behavior so that {{cleaner_chore_run}} only runs when the cleaner chore is 
> disabled, e.g. the proposed admin CLI options are
> {noformat}
> hbase> cleaner_chore_run   # introduced in HBASE-17280, but now only 
> runs when the cleaner chore is disabled
> hbase> cleaner_chore_run 'hfiles'  # added, runs when the cleaner chore is 
> disabled
> hbase> cleaner_chore_run 'oldwals' # added, runs when the cleaner chore is 
> disabled
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21012) Revert the change of serializing TimeRangeTracker

2018-08-09 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575736#comment-16575736
 ] 

stack commented on HBASE-21012:
---

+1 on patch.  +1 for branch-2.0. Copy the doc changes and paste as RN for this 
issue. Thanks.

> Revert the change of serializing TimeRangeTracker
> -
>
> Key: HBASE-21012
> URL: https://issues.apache.org/jira/browse/HBASE-21012
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Kuan-Po Tseng
>Priority: Critical
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-21012.master.001.patch, 
> HBASE-21012.master.002.patch, HBASE-21012.master.003.patch, 
> HBASE-21012.master.003.patch, HBASE-21012.master.004.patch, 
> HBASE-21012.master.005.patch
>
>
> HBASE-18754 changed the serialization of TimeRangeTracker from the "manual 
> way" to protobuf. However, the change breaks the backward compatibility of 
> hfile. We should revert the change ASAP.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21028) Backport HBASE-18633 to branch-1.3

2018-08-09 Thread Daniel Wong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Wong updated HBASE-21028:

Attachment: HBASE-21028-branch-1.3.patch

> Backport HBASE-18633 to branch-1.3
> --
>
> Key: HBASE-21028
> URL: https://issues.apache.org/jira/browse/HBASE-21028
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 1.3.2
>Reporter: Daniel Wong
>Priority: Minor
> Fix For: 1.3.3
>
> Attachments: HBASE-21028-branch-1.3.patch
>
>
> The logging improvements in HBASE-18633 would give greater visibility on 
> systems in 1.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21028) Backport HBASE-18633 to branch-1.3

2018-08-09 Thread Daniel Wong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Wong updated HBASE-21028:

Attachment: (was: HBASE-21028.patch)

> Backport HBASE-18633 to branch-1.3
> --
>
> Key: HBASE-21028
> URL: https://issues.apache.org/jira/browse/HBASE-21028
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 1.3.2
>Reporter: Daniel Wong
>Priority: Minor
> Fix For: 1.3.3
>
> Attachments: HBASE-21028-branch-1.3.patch
>
>
> The logging improvements in HBASE-18633 would give greater visibility on 
> systems in 1.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20952) Re-visit the WAL API

2018-08-09 Thread Mike Drob (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575763#comment-16575763
 ] 

Mike Drob commented on HBASE-20952:
---

AIUI hbase can block its writes pending an acknowledgement from the storage 
implementation. This makes sense if we want to give operators the ability to 
gain some performance in exchange for some durability risk (flush v sync). I'm 
not actually sure what my intent was anymore; maybe it will come back to me 
later.
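
Purely as an illustration of the kind of primitives being discussed (an append call plus weaker/stronger durability barriers), not a proposed HBase interface:

{code:java}
import java.io.IOException;
import java.util.concurrent.CompletableFuture;

// Illustrative sketch only; names and shapes are hypothetical.
interface WriteAheadLogSketch {
  /** Queue an edit; the returned future completes once the edit is acknowledged. */
  CompletableFuture<Long> append(byte[] encodedEdit) throws IOException;

  /** Weaker barrier: hand buffered edits to the storage layer (flush). */
  void flush() throws IOException;

  /** Stronger barrier: block until edits up to sequenceId are durable (sync). */
  void sync(long sequenceId) throws IOException;
}
{code}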

> Re-visit the WAL API
> 
>
> Key: HBASE-20952
> URL: https://issues.apache.org/jira/browse/HBASE-20952
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Priority: Major
>
> Take a step back from the current WAL implementations and think about what an 
> HBase WAL API should look like. What are the primitive calls that we require 
> to guarantee durability of writes with a high degree of performance?
> The API needs to take the current implementations into consideration. We 
> should also have a mind for what is happening in the Ratis LogService (but 
> the LogService should not dictate what HBase's WAL API looks like RATIS-272).
> Other "systems" inside of HBase that use WALs are replication and 
> backup&restore. Replication has the use-case for "tail"'ing the WAL which we 
> should provide via our new API. B&R doesn't do anything fancy (IIRC). We 
> should make sure all consumers are generally going to be OK with the API we 
> create.
> The API may be "OK" (or OK in part). We also need to consider other methods 
> which were "bolted" on, such as {{AbstractFSWAL}} and 
> {{WALFileLengthProvider}}. Other corners of "WAL use" (like the 
> {{WALSplitter}}) should also be looked at so they use WAL APIs only.
> We also need to make sure that adequate interface audience and stability 
> annotations are chosen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21025) Add cache for TableStateManager

2018-08-09 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575779#comment-16575779
 ] 

Duo Zhang commented on HBASE-21025:
---

{quote}
For the cache, if a table is removed, we for sure clear its entry in the cache?
{quote}

You can see the code in setDeletedTable: we clear the cache in a finally block, 
no matter whether the meta deletion succeeded. I also did the same thing when 
updating meta: if we fail, we clear the cache. This is important, as we do not 
know whether we have successfully updated meta when there is an exception, so 
the safe way is to clear the cache, otherwise there may be an inconsistency. 
Next time we will read it directly from meta.

Let me change the name and upload a new one. Do we also need this for branch-2.0?
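
A rough sketch of the pattern described above, with all names hypothetical (not the actual TableStateManager code):

{code:java}
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of "clear the cache whenever the meta write is in doubt".
class TableStateCacheSketch {
  interface MetaWriter {
    void writeState(String table, String state) throws IOException;
    void delete(String table) throws IOException;
  }

  private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

  void updateState(String table, String newState, MetaWriter meta) throws IOException {
    try {
      meta.writeState(table, newState); // may fail after partially applying
      cache.put(table, newState);
    } catch (IOException e) {
      // We cannot tell whether meta was actually updated, so drop the cached
      // entry; the next read falls through to meta instead of a stale value.
      cache.remove(table);
      throw e;
    }
  }

  void deleteTable(String table, MetaWriter meta) throws IOException {
    try {
      meta.delete(table);
    } finally {
      // Always drop the entry, whether or not the meta deletion succeeded.
      cache.remove(table);
    }
  }
}
{code}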

> Add cache for TableStateManager
> ---
>
> Key: HBASE-21025
> URL: https://issues.apache.org/jira/browse/HBASE-21025
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-21025-v1.patch, HBASE-21025.patch
>
>
> After HBASE-20881, we will check whether a table is disabled in SCP, so we 
> need to add a cache for it to improve MTTR and also reduce the requests to meta.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21012) Revert the change of serializing TimeRangeTracker

2018-08-09 Thread Kuan-Po Tseng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuan-Po Tseng updated HBASE-21012:
--
Release Note: HFiles generated by 2.0.0, 2.0.1, and 2.1.0 are not forward 
compatible with 1.4.6-, 1.3.2.1-, 1.2.6.1-, and other inactive releases. HFiles 
lose compatibility because hbase in the new versions (2.0.0, 2.0.1, 2.1.0) uses 
protobuf to serialize/deserialize TimeRangeTracker (TRT) while old versions use 
DataInput/DataOutput. To solve this, we have to put HBASE-21012 into 2.x and 
HBASE-21013 into 1.x. For more information, please check HBASE-21008.
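
For contrast only, the "manual way" is essentially two longs written with DataOutput, which older readers expect, whereas the protobuf form has a different on-disk layout. A simplified sketch (not the actual TimeRangeTracker code):

{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Simplified illustration of DataInput/DataOutput-style serialization of a
// time range; the real TimeRangeTracker has more to it.
final class TimeRangeSketch {
  static byte[] writeManually(long minTs, long maxTs) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    try (DataOutputStream out = new DataOutputStream(bytes)) {
      out.writeLong(minTs);
      out.writeLong(maxTs);
    }
    return bytes.toByteArray();
  }

  static long[] readManually(byte[] data) throws IOException {
    try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
      return new long[] { in.readLong(), in.readLong() };
    }
  }
}
{code}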

> Revert the change of serializing TimeRangeTracker
> -
>
> Key: HBASE-21012
> URL: https://issues.apache.org/jira/browse/HBASE-21012
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Kuan-Po Tseng
>Priority: Critical
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-21012.master.001.patch, 
> HBASE-21012.master.002.patch, HBASE-21012.master.003.patch, 
> HBASE-21012.master.003.patch, HBASE-21012.master.004.patch, 
> HBASE-21012.master.005.patch
>
>
> HBASE-18754 changed the serialization of TimeRangeTracker from the "manual 
> way" to protobuf. However, the change breaks the backward compatibility of 
> hfile. We should revert the change ASAP.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21012) Revert the change of serializing TimeRangeTracker

2018-08-09 Thread Kuan-Po Tseng (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575780#comment-16575780
 ] 

Kuan-Po Tseng commented on HBASE-21012:
---

{quote}+1 on patch. +1 for branch-2.0. Copy the doc changes and paste as RN for 
this issue. Thanks.
{quote}
Done. Thanks for your comment.

 

> Revert the change of serializing TimeRangeTracker
> -
>
> Key: HBASE-21012
> URL: https://issues.apache.org/jira/browse/HBASE-21012
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Kuan-Po Tseng
>Priority: Critical
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-21012.master.001.patch, 
> HBASE-21012.master.002.patch, HBASE-21012.master.003.patch, 
> HBASE-21012.master.003.patch, HBASE-21012.master.004.patch, 
> HBASE-21012.master.005.patch
>
>
> HBASE-18754 changed the serialization of TimeRangeTracker from the "manual 
> way" to protobuf. However, the change breaks the backward compatibility of 
> hfile. We should revert the change ASAP.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21012) Revert the change of serializing TimeRangeTracker

2018-08-09 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575785#comment-16575785
 ] 

Duo Zhang commented on HBASE-21012:
---

+1.

> Revert the change of serializing TimeRangeTracker
> -
>
> Key: HBASE-21012
> URL: https://issues.apache.org/jira/browse/HBASE-21012
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Kuan-Po Tseng
>Priority: Critical
> Fix For: 3.0.0, 2.0.2, 2.1.1
>
> Attachments: HBASE-21012.master.001.patch, 
> HBASE-21012.master.002.patch, HBASE-21012.master.003.patch, 
> HBASE-21012.master.003.patch, HBASE-21012.master.004.patch, 
> HBASE-21012.master.005.patch
>
>
> HBASE-18754 changed the serialization of TimeRangeTracker from the "manual 
> way" to protobuf. However, the change breaks the backward compatibility of 
> hfile. We should revert the change ASAP.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21025) Add cache for TableStateManager

2018-08-09 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21025:
--
Attachment: HBASE-21025-v2.patch

> Add cache for TableStateManager
> ---
>
> Key: HBASE-21025
> URL: https://issues.apache.org/jira/browse/HBASE-21025
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1
>
> Attachments: HBASE-21025-v1.patch, HBASE-21025-v2.patch, 
> HBASE-21025.patch
>
>
> After HBASE-20881, we will check whether a table is disabled in SCP, so we 
> need to add a cache for it to improve MTTR and also reduce the requests to meta.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21030) Correct javadoc for append operation

2018-08-09 Thread Subrat Mishra (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575807#comment-16575807
 ] 

Subrat Mishra commented on HBASE-21030:
---

Hi Nihal,

I think it's a typo instead of "increment" it should be "append"
@param append object that specifies the columns and amounts to be used
   *  for the append operations
Thanks.

> Correct javadoc for append operation
> 
>
> Key: HBASE-21030
> URL: https://issues.apache.org/jira/browse/HBASE-21030
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 1.5.0
>Reporter: Nihal Jain
>Priority: Minor
>  Labels: beginner, beginners
>
> The doc for {{append}} operation is incorrect. (see {{@param append}} in the 
> code snippet below or 
> [Table.java#L566|https://github.com/apache/hbase/blob/3f5033f88ee9da2a5a42d058b9aefe57b089b3e1/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java#L566])
> {code:java}
>   /**
>* Appends values to one or more columns within a single row.
>* 
>* This operation guaranteed atomicity to readers. Appends are done
>* under a single row lock, so write operations to a row are synchronized, 
> and
>* readers are guaranteed to see this operation fully completed.
>*
>* @param append object that specifies the columns and amounts to be used
>*  for the increment operations
>* @throws IOException e
>* @return values of columns after the append operation (maybe null)
>*/
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21030) Correct javadoc for append operation

2018-08-09 Thread Subrat Mishra (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575807#comment-16575807
 ] 

Subrat Mishra edited comment on HBASE-21030 at 8/10/18 6:20 AM:


Hi Nihal,

I think it's a typo: instead of "increment" it should be "append"

{code}
 @param append object that specifies the columns and amounts to be used for the 
append operations
{code}


 Thanks.


was (Author: subrat.mishra):
Hi Nihal,

I think it's a typo instead of "increment" it should be "append"
@param append object that specifies the columns and amounts to be used
   *  for the append operations
Thanks.

> Correct javadoc for append operation
> 
>
> Key: HBASE-21030
> URL: https://issues.apache.org/jira/browse/HBASE-21030
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 1.5.0
>Reporter: Nihal Jain
>Priority: Minor
>  Labels: beginner, beginners
>
> The doc for {{append}} operation is incorrect. (see {{@param append}} in the 
> code snippet below or 
> [Table.java#L566|https://github.com/apache/hbase/blob/3f5033f88ee9da2a5a42d058b9aefe57b089b3e1/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java#L566])
> {code:java}
>   /**
>* Appends values to one or more columns within a single row.
>* 
>* This operation guaranteed atomicity to readers. Appends are done
>* under a single row lock, so write operations to a row are synchronized, 
> and
>* readers are guaranteed to see this operation fully completed.
>*
>* @param append object that specifies the columns and amounts to be used
>*  for the increment operations
>* @throws IOException e
>* @return values of columns after the append operation (maybe null)
>*/
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21030) Correct javadoc for append operation

2018-08-09 Thread Nihal Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575809#comment-16575809
 ] 

Nihal Jain commented on HBASE-21030:


{quote}I think it's a typo instead of "increment" it should be "append"
{quote}
Yes, but we also need to change the message, as in the case of append we do not 
have the concept of an amount, but rather a value to be appended. [~subrat.mishra]

> Correct javadoc for append operation
> 
>
> Key: HBASE-21030
> URL: https://issues.apache.org/jira/browse/HBASE-21030
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 1.5.0
>Reporter: Nihal Jain
>Priority: Minor
>  Labels: beginner, beginners
>
> The doc for {{append}} operation is incorrect. (see {{@param append}} in the 
> code snippet below or 
> [Table.java#L566|https://github.com/apache/hbase/blob/3f5033f88ee9da2a5a42d058b9aefe57b089b3e1/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java#L566])
> {code:java}
>   /**
>* Appends values to one or more columns within a single row.
>* 
>* This operation guaranteed atomicity to readers. Appends are done
>* under a single row lock, so write operations to a row are synchronized, 
> and
>* readers are guaranteed to see this operation fully completed.
>*
>* @param append object that specifies the columns and amounts to be used
>*  for the increment operations
>* @throws IOException e
>* @return values of columns after the append operation (maybe null)
>*/
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21030) Correct javadoc for append operation

2018-08-09 Thread Subrat Mishra (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575821#comment-16575821
 ] 

Subrat Mishra commented on HBASE-21030:
---

Thanks Nihal. Amount signifies quantity. Value makes more sense here.

{code}
@param append object that specifies the columns and values to be appended.
{code}

Is it fine now?
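
For reference, a sketch of how the corrected {{@param}} line would sit in the existing javadoc (only that line changes from the snippet quoted below; the signature is the existing Table#append):

{code:java}
  /**
   * Appends values to one or more columns within a single row.
   * <p>
   * This operation guaranteed atomicity to readers. Appends are done
   * under a single row lock, so write operations to a row are synchronized, and
   * readers are guaranteed to see this operation fully completed.
   *
   * @param append object that specifies the columns and values to be appended
   * @throws IOException e
   * @return values of columns after the append operation (maybe null)
   */
  Result append(Append append) throws IOException;
{code}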

> Correct javadoc for append operation
> 
>
> Key: HBASE-21030
> URL: https://issues.apache.org/jira/browse/HBASE-21030
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 1.5.0
>Reporter: Nihal Jain
>Priority: Minor
>  Labels: beginner, beginners
>
> The doc for {{append}} operation is incorrect. (see {{@param append}} in the 
> code snippet below or 
> [Table.java#L566|https://github.com/apache/hbase/blob/3f5033f88ee9da2a5a42d058b9aefe57b089b3e1/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java#L566])
> {code:java}
>   /**
>* Appends values to one or more columns within a single row.
>* 
>* This operation guaranteed atomicity to readers. Appends are done
>* under a single row lock, so write operations to a row are synchronized, 
> and
>* readers are guaranteed to see this operation fully completed.
>*
>* @param append object that specifies the columns and amounts to be used
>*  for the increment operations
>* @throws IOException e
>* @return values of columns after the append operation (maybe null)
>*/
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20943) Add offline/online region count into metrics

2018-08-09 Thread huaxiang sun (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16575836#comment-16575836
 ] 

huaxiang sun commented on HBASE-20943:
--

Hi [~jinghanx], I was about to commit the change and found that I could get 
your email, which I will put into the Author field. Do you mind sending a 
preferred email address? Thanks.

> Add offline/online region count into metrics
> 
>
> Key: HBASE-20943
> URL: https://issues.apache.org/jira/browse/HBASE-20943
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.0.0, 1.2.6.1
>Reporter: Tianying Chang
>Assignee: jinghan xu
>Priority: Minor
> Attachments: HBASE-20943-master-v1.patch, 
> HBASE-20943-master-v2.patch, HBASE-20943-master-v3.patch, HBASE-20943.patch, 
> Screen Shot 2018-07-25 at 2.51.19 PM.png
>
>
> We intensively use metrics to monitor the health of our HBase production 
> cluster. We have seen some regions of a table get stuck and unable to be 
> brought online due to an AWS issue which corrupted some log files. It would be 
> good if we could catch this early. Although the WebUI has this information, it 
> is not useful for automated monitoring. By adding this metric, we can easily 
> monitor them with our monitoring system.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)