[jira] [Commented] (HBASE-21610) numOpenConnections metric is set to -1 when zero server channel exist

2018-12-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724802#comment-16724802
 ] 

Hadoop QA commented on HBASE-21610:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
13s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
11s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m 25s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}134m 52s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestRecoveredEdits |
|   | hadoop.hbase.master.TestRestartCluster |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952293/HBASE-21610.patch |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux d2bc71b64324 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / fb58a23e56 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/15324/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Tes

[jira] [Commented] (HBASE-21565) Delete dead server from dead server list too early leads to concurrent Server Crash Procedures(SCP) for a same server

2018-12-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724816#comment-16724816
 ] 

Hadoop QA commented on HBASE-21565:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
52s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
21s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
5s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
27s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m  0s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}130m  8s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 52s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestRecoveredEdits |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:42ca976 |
| JIRA Issue | HBASE-21565 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952296/HBASE-21565.branch-2.002.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 269a25dc5148 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2 / d2832c1708 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/15325/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/15325/testReport/ |
| Max. process+thread count | 4479 (vs. ulimit of 1) |
| modules | C: hbase-serve

[jira] [Commented] (HBASE-21565) Delete dead server from dead server list too early leads to concurrent Server Crash Procedures(SCP) for a same server

2018-12-19 Thread Jingyun Tian (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724881#comment-16724881
 ] 

Jingyun Tian commented on HBASE-21565:
--

This failed test is not related to this patch. Pushed to branch-2.

> Delete dead server from dead server list too early leads to concurrent Server 
> Crash Procedures(SCP) for a same server
> -
>
> Key: HBASE-21565
> URL: https://issues.apache.org/jira/browse/HBASE-21565
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Critical
> Attachments: HBASE-21565.branch-2.001.patch, 
> HBASE-21565.branch-2.002.patch, HBASE-21565.master.001.patch, 
> HBASE-21565.master.002.patch, HBASE-21565.master.003.patch, 
> HBASE-21565.master.004.patch, HBASE-21565.master.005.patch, 
> HBASE-21565.master.006.patch, HBASE-21565.master.007.patch, 
> HBASE-21565.master.008.patch, HBASE-21565.master.009.patch, 
> HBASE-21565.master.010.patch
>
>
> Two kinds of SCP can be scheduled for the same server during a cluster 
> restart: one triggered by ZK session timeout, the other triggered when a new 
> server reports in, which makes the stale one fail over. The only barrier 
> against these two kinds of SCP is the check whether the server is in the dead 
> server list.
> {code}
> if (this.deadservers.isDeadServer(serverName)) {
>   LOG.warn("Expiration called on {} but crash processing already in 
> progress", serverName);
>   return false;
> }
> {code}
> But the problem is that when the master finishes initialization, it deletes 
> all stale servers from the dead server list. Thus when the SCP for the ZK 
> session timeout comes in, the barrier has already been removed.
> Here are the logs showing how this problem occurs.
> {code}
> 2018-12-07,11:42:37,589 INFO 
> org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure: Start pid=9, 
> state=RUNNABLE:SERVER_CRASH_START, hasLock=true; ServerCrashProcedure 
> server=c4-hadoop-tst-st27.bj,29100,1544153846859, splitWal=true, meta=false
> 2018-12-07,11:42:58,007 INFO 
> org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure: Start pid=444, 
> state=RUNNABLE:SERVER_CRASH_START, hasLock=true; ServerCrashProcedure 
> server=c4-hadoop-tst-st27.bj,29100,1544153846859, splitWal=true, meta=false
> {code}
> Now we can see that two SCPs are scheduled for the same server,
> and the first procedure finishes only after the second SCP starts.
> {code}
> 2018-12-07,11:43:08,038 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=9, 
> state=SUCCESS, hasLock=false; ServerCrashProcedure 
> server=c4-hadoop-tst-st27.bj,29100,1544153846859, splitWal=true, meta=false 
> in 30.5340sec
> {code}
> This leads to the problem that regions are assigned twice.
> {code}
> 2018-12-07,12:16:33,039 WARN 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager: rit=OPEN, 
> location=c4-hadoop-tst-st28.bj,29100,1544154149607, table=test_failover, 
> region=459b3130b40caf3b8f3e1421766f4089 reported OPEN on 
> server=c4-hadoop-tst-st29.bj,29100,1544154149615 but state has otherwise
> {code}
> And here we can see that the server is removed from the dead server list 
> before the second SCP starts.
> {code}
> 2018-12-07,11:42:44,938 DEBUG org.apache.hadoop.hbase.master.DeadServer: 
> Removed c4-hadoop-tst-st27.bj,29100,1544153846859 ; numProcessing=3
> {code}
> Thus we should not delete a dead server from the dead server list immediately.
> A patch to fix this problem will be uploaded later.
>  
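The barrier race described above can be sketched as a minimal model (illustrative names, not the actual HBase `DeadServer`/`ServerManager` classes): removing the server from the dead-server list while its first SCP is still running re-opens the barrier, so a second SCP for the same server can be scheduled.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal model of the race: the dead-server list is the only barrier
// against scheduling a second ServerCrashProcedure for the same server.
public class DeadServerBarrier {
    private final Set<String> deadServers = new HashSet<>();
    private int scheduledScps = 0;

    // Mirrors the quoted expireServer check: refuse a second SCP while
    // the server is still on the dead-server list.
    public boolean expireServer(String serverName) {
        if (deadServers.contains(serverName)) {
            return false; // crash processing already in progress
        }
        deadServers.add(serverName);
        scheduledScps++; // schedule an SCP
        return true;
    }

    // Mirrors the premature cleanup after master initialization: removing
    // the server here re-opens the barrier while SCP #1 is still running.
    public void removeStaleServer(String serverName) {
        deadServers.remove(serverName);
    }

    public int scheduledScps() {
        return scheduledScps;
    }
}
```

In this model, calling removeStaleServer before the first SCP finishes lets a second expireServer call succeed, matching the pid=9 / pid=444 log lines above.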



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21588) Procedure v2 wal splitting implementation

2018-12-19 Thread Jingyun Tian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-21588:
-
Attachment: HBASE-21588.master.004.patch

> Procedure v2 wal splitting implementation
> -
>
> Key: HBASE-21588
> URL: https://issues.apache.org/jira/browse/HBASE-21588
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Attachments: HBASE-21588.master.003.patch, 
> HBASE-21588.master.004.patch
>
>
> Create a sub-task to submit the implementation of procedure v2 WAL splitting.





[jira] [Commented] (HBASE-21592) quota.addGetResult(r) throw NPE

2018-12-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724907#comment-16724907
 ] 

Hudson commented on HBASE-21592:


Results for branch branch-1
[build #599 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/599/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/599//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/599//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/599//JDK8_Nightly_Build_Report_(Hadoop2)/]




(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> quota.addGetResult(r)  throw  NPE
> -
>
> Key: HBASE-21592
> URL: https://issues.apache.org/jira/browse/HBASE-21592
> Project: HBase
>  Issue Type: Bug
>Reporter: xuqinya
>Assignee: xuqinya
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.2.10, 1.4.10, 2.1.3, 2.0.5
>
> Attachments: HBASE-21592.branch-1.0001.patch, 
> HBASE-21592.branch-2.0001.patch, HBASE-21592.master.0001.patch, 
> HBASE-21592.master.0002.patch, HBASE-21592.master.0003.patch, 
> HBASE-21592.master.0004.patch
>
>
> After setting an RPC quota, table.exists(Get) causes quota.addGetResult(r) to 
> throw an NPE.
> {code:java}
> set_quota TYPE => THROTTLE, NAMESPACE => 'ns1', LIMIT => '1000req/sec'
> {code}
> {code:java}
> Connection conn = ConnectionFactory.createConnection(config);
> Table htable = conn.getTable(TableName.valueOf("ns1:t1"));
> boolean exists = htable.exists(new Get(Bytes.toBytes("123"))); {code}
> log:
> java.io.IOException: java.io.IOException
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>  at java.lang.Thread.run(Thread.java:745)
>  Caused by: java.lang.NullPointerException
>  at 
> org.apache.hadoop.hbase.quotas.QuotaUtil.calculateResultSize(QuotaUtil.java:282)
>  at 
> org.apache.hadoop.hbase.quotas.DefaultOperationQuota.addGetResult(DefaultOperationQuota.java:99)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:1907)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32381)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2135)
>  ... 4 more
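A rough sketch of the failure mode in the stack trace above (illustrative names and signatures, not the actual QuotaUtil implementation): an exists() probe can yield a Result carrying no cells, and a size calculation that iterates the cell array without a null check throws exactly this kind of NPE. A defensive variant treats the cell-less Result as zero bytes.

```java
// Hypothetical model of calculateResultSize: the "cells" of a Result are
// modeled as a byte[][] for simplicity.
public class ResultSizeSketch {
    // Unsafe variant: NPE when rawCells is null (e.g. an exists() check).
    static long calculateSizeUnsafe(byte[][] rawCells) {
        long size = 0;
        for (byte[] cell : rawCells) { // throws NullPointerException on null
            size += cell.length;
        }
        return size;
    }

    // Defensive variant: a cell-less Result contributes zero bytes.
    static long calculateSizeSafe(byte[][] rawCells) {
        if (rawCells == null) {
            return 0L;
        }
        long size = 0;
        for (byte[] cell : rawCells) {
            size += cell.length;
        }
        return size;
    }
}
```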





[jira] [Commented] (HBASE-21592) quota.addGetResult(r) throw NPE

2018-12-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724924#comment-16724924
 ] 

Hudson commented on HBASE-21592:


Results for branch master
[build #671 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/671/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/master/671//console].




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/671//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/671//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> quota.addGetResult(r)  throw  NPE
> -
>
> Key: HBASE-21592
> URL: https://issues.apache.org/jira/browse/HBASE-21592
> Project: HBase
>  Issue Type: Bug
>Reporter: xuqinya
>Assignee: xuqinya
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.2.10, 1.4.10, 2.1.3, 2.0.5
>
> Attachments: HBASE-21592.branch-1.0001.patch, 
> HBASE-21592.branch-2.0001.patch, HBASE-21592.master.0001.patch, 
> HBASE-21592.master.0002.patch, HBASE-21592.master.0003.patch, 
> HBASE-21592.master.0004.patch
>
>
> After setting an RPC quota, table.exists(Get) causes quota.addGetResult(r) to 
> throw an NPE.
> {code:java}
> set_quota TYPE => THROTTLE, NAMESPACE => 'ns1', LIMIT => '1000req/sec'
> {code}
> {code:java}
> Connection conn = ConnectionFactory.createConnection(config);
> Table htable = conn.getTable(TableName.valueOf("ns1:t1"));
> boolean exists = htable.exists(new Get(Bytes.toBytes("123"))); {code}
> log:
> java.io.IOException: java.io.IOException
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>  at java.lang.Thread.run(Thread.java:745)
>  Caused by: java.lang.NullPointerException
>  at 
> org.apache.hadoop.hbase.quotas.QuotaUtil.calculateResultSize(QuotaUtil.java:282)
>  at 
> org.apache.hadoop.hbase.quotas.DefaultOperationQuota.addGetResult(DefaultOperationQuota.java:99)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:1907)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32381)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2135)
>  ... 4 more





[jira] [Commented] (HBASE-21535) Zombie Master detector is not working

2018-12-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724926#comment-16724926
 ] 

Hudson commented on HBASE-21535:


Results for branch master
[build #671 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/671/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/master/671//console].




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/671//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/671//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Zombie Master detector is not working
> -
>
> Key: HBASE-21535
> URL: https://issues.apache.org/jira/browse/HBASE-21535
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Critical
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21535.branch-2.patch, HBASE-21535.branch-2.patch, 
> HBASE-21535.patch, HBASE-21535.v2.patch
>
>
> We have an InitializationMonitor thread in HMaster which detects a zombie 
> HMaster based on _hbase.master.initializationmonitor.timeout_ and halts if 
> _hbase.master.initializationmonitor.haltontimeout_ is set to _true_.
> After HBASE-19694, the HMaster initialization order was corrected. The HMaster 
> is set active after initializing the ZK system trackers, as follows:
> {noformat}
>  status.setStatus("Initializing ZK system trackers");
>  initializeZKBasedSystemTrackers();
>  status.setStatus("Loading last flushed sequence id of regions");
>  try {
>  this.serverManager.loadLastFlushedSequenceIds();
>  } catch (IOException e) {
>  LOG.debug("Failed to load last flushed sequence id of regions"
>  + " from file system", e);
>  }
>  // Set ourselves as active Master now our claim has succeeded up in zk.
>  this.activeMaster = true;
> {noformat}
> But the zombie detector thread is started at the beginning of 
> finishActiveMasterInitialization():
> {noformat}
>  private void finishActiveMasterInitialization(MonitoredTask status) throws 
> IOException,
>  InterruptedException, KeeperException, ReplicationException {
>  Thread zombieDetector = new Thread(new InitializationMonitor(this),
>  "ActiveMasterInitializationMonitor-" + System.currentTimeMillis());
>  zombieDetector.setDaemon(true);
>  zombieDetector.start();
> {noformat}
> When zombieDetector starts executing, "master.isActiveMaster()" is still 
> false, so the loop exits immediately and it can't detect a zombie master.
> {noformat}
>  @Override
>  public void run() {
>  try {
>  while (!master.isStopped() && master.isActiveMaster()) {
>  Thread.sleep(timeout);
>  if (master.isInitialized()) {
>  LOG.debug("Initialization completed within allotted tolerance. Monitor 
> exiting.");
>  } else {
>  LOG.error("Master failed to complete initialization after " + timeout + "ms. 
> Please"
>  + " consider submitting a bug report including a thread dump of this 
> process.");
>  if (haltOnTimeout) {
>  LOG.error("Zombie Master exiting. Thread dump to stdout");
>  Threads.printThreadInfo(System.out, "Zombie HMaster");
>  System.exit(-1);
>  }
>  }
>  }
>  } catch (InterruptedException ie) {
>  LOG.trace("InitMonitor thread interrupted. Existing.");
>  }
>  }
>  }
> {noformat}
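The ordering bug can be reduced to a small model (illustrative names, not the actual HMaster/InitializationMonitor code): the monitor loop is guarded by the active-master flag, but the thread is started before that flag is set, so the loop body never runs and the zombie check is skipped entirely.

```java
// Minimal model: the monitor counts how many sleep/check iterations its
// while-loop performs, capped so the test terminates.
public class ZombieMonitorSketch {
    volatile boolean activeMaster = false;

    int runMonitor(int maxIterations) {
        int iterations = 0;
        // Mirrors: while (!master.isStopped() && master.isActiveMaster())
        while (activeMaster && iterations < maxIterations) {
            iterations++;
        }
        return iterations;
    }
}
```

Started before activeMaster is set, runMonitor returns 0 (no zombie detection); setting the flag first restores the intended behavior.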





[jira] [Commented] (HBASE-21514) Refactor CacheConfig

2018-12-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724927#comment-16724927
 ] 

Hudson commented on HBASE-21514:


Results for branch master
[build #671 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/671/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/master/671//console].




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/671//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/671//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Refactor CacheConfig
> 
>
> Key: HBASE-21514
> URL: https://issues.apache.org/jira/browse/HBASE-21514
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21514.branch-2.001.patch, 
> HBASE-21514.branch-2.002.patch, HBASE-21514.branch-2.003.patch, 
> HBASE-21514.master.001.patch, HBASE-21514.master.002.patch, 
> HBASE-21514.master.003.patch, HBASE-21514.master.004.patch, 
> HBASE-21514.master.005.patch, HBASE-21514.master.006.patch, 
> HBASE-21514.master.007.patch, HBASE-21514.master.008.patch, 
> HBASE-21514.master.009.patch, HBASE-21514.master.010.patch, 
> HBASE-21514.master.011.patch, HBASE-21514.master.011.patch, 
> HBASE-21514.master.012.patch, HBASE-21514.master.013.patch, 
> HBASE-21514.master.013.patch, HBASE-21514.master.014.patch, 
> HBASE-21514.master.addendum.patch
>
>
> # Add the block cache and mob file cache as HRegionServer member variables. 
> One RS has one block cache and one mob file cache.
>  # Move the global cache instances from CacheConfig to BlockCacheFactory, and 
> keep only config stuff in CacheConfig. CacheConfig still holds a reference to 
> the RegionServer's block cache. Whether to cache a block requires that a block 
> cache is present and the related config is true.
>  # Remove MobCacheConfig. It was only used for the global mob file cache 
> instance; after moving the mob file cache to the RegionServer, it is not used 
> anymore.
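The caching condition in item 2 can be sketched as a simple predicate (hypothetical signature, not the actual CacheConfig API): a block is cached only when a block cache instance is present and the relevant config flag is enabled.

```java
// Illustrative predicate: both conditions from item 2 must hold.
public class CacheDecisionSketch {
    static boolean shouldCacheBlock(Object blockCache, boolean cacheEnabled) {
        return blockCache != null && cacheEnabled;
    }
}
```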





[jira] [Commented] (HBASE-21565) Delete dead server from dead server list too early leads to concurrent Server Crash Procedures(SCP) for a same server

2018-12-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724925#comment-16724925
 ] 

Hudson commented on HBASE-21565:


Results for branch master
[build #671 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/671/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- Something went wrong running this stage, please [check relevant console 
output|https://builds.apache.org/job/HBase%20Nightly/job/master/671//console].




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/671//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/671//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Delete dead server from dead server list too early leads to concurrent Server 
> Crash Procedures(SCP) for a same server
> -
>
> Key: HBASE-21565
> URL: https://issues.apache.org/jira/browse/HBASE-21565
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Critical
> Attachments: HBASE-21565.branch-2.001.patch, 
> HBASE-21565.branch-2.002.patch, HBASE-21565.master.001.patch, 
> HBASE-21565.master.002.patch, HBASE-21565.master.003.patch, 
> HBASE-21565.master.004.patch, HBASE-21565.master.005.patch, 
> HBASE-21565.master.006.patch, HBASE-21565.master.007.patch, 
> HBASE-21565.master.008.patch, HBASE-21565.master.009.patch, 
> HBASE-21565.master.010.patch
>
>
> Two kinds of SCP can be scheduled for the same server during a cluster 
> restart: one triggered by ZK session timeout, the other triggered when a new 
> server reports in, which makes the stale one fail over. The only barrier 
> against these two kinds of SCP is the check whether the server is in the dead 
> server list.
> {code}
> if (this.deadservers.isDeadServer(serverName)) {
>   LOG.warn("Expiration called on {} but crash processing already in 
> progress", serverName);
>   return false;
> }
> {code}
> But the problem is that when the master finishes initialization, it deletes 
> all stale servers from the dead server list. Thus when the SCP for the ZK 
> session timeout comes in, the barrier has already been removed.
> Here are the logs showing how this problem occurs.
> {code}
> 2018-12-07,11:42:37,589 INFO 
> org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure: Start pid=9, 
> state=RUNNABLE:SERVER_CRASH_START, hasLock=true; ServerCrashProcedure 
> server=c4-hadoop-tst-st27.bj,29100,1544153846859, splitWal=true, meta=false
> 2018-12-07,11:42:58,007 INFO 
> org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure: Start pid=444, 
> state=RUNNABLE:SERVER_CRASH_START, hasLock=true; ServerCrashProcedure 
> server=c4-hadoop-tst-st27.bj,29100,1544153846859, splitWal=true, meta=false
> {code}
> Now we can see that two SCPs are scheduled for the same server,
> and the first procedure finishes only after the second SCP starts.
> {code}
> 2018-12-07,11:43:08,038 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=9, 
> state=SUCCESS, hasLock=false; ServerCrashProcedure 
> server=c4-hadoop-tst-st27.bj,29100,1544153846859, splitWal=true, meta=false 
> in 30.5340sec
> {code}
> This leads to the problem that regions are assigned twice.
> {code}
> 2018-12-07,12:16:33,039 WARN 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager: rit=OPEN, 
> location=c4-hadoop-tst-st28.bj,29100,1544154149607, table=test_failover, 
> region=459b3130b40caf3b8f3e1421766f4089 reported OPEN on 
> server=c4-hadoop-tst-st29.bj,29100,1544154149615 but state has otherwise
> {code}
> And here we can see that the server is removed from the dead server list 
> before the second SCP starts.
> {code}
> 2018-12-07,11:42:44,938 DEBUG org.apache.hadoop.hbase.master.DeadServer: 
> Removed c4-hadoop-tst-st27.bj,29100,1544153846859 ; numProcessing=3
> {code}
> Thus we should not delete a dead server from the dead server list immediately.
> A patch to fix this problem will be uploaded later.
>  
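The race above can be sketched as a simple barrier: keep the server name registered until its SCP actually finishes, so a second expiration attempt is rejected. The class and method names below (`DeadServerGuard`, `expireServer`, `finishCrashProcessing`) are hypothetical, not HBase's actual API — this is a minimal sketch of the intended ordering, not the patch.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: the dead-server set acts as a barrier so that only
// one ServerCrashProcedure (SCP) can be scheduled per server name.
public class DeadServerGuard {
    private final Set<String> processingServers = new HashSet<>();

    // Returns true if a new SCP may be scheduled for this server.
    public synchronized boolean expireServer(String serverName) {
        if (processingServers.contains(serverName)) {
            // Matches the log message quoted above: crash processing is
            // already in progress, so do not schedule a second SCP.
            return false;
        }
        processingServers.add(serverName);
        return true;
    }

    // Remove the barrier only once the SCP has actually finished --
    // not eagerly when the master finishes initialization.
    public synchronized void finishCrashProcessing(String serverName) {
        processingServers.remove(serverName);
    }
}
```

With this ordering, the second expiration at 11:42:58 would have been rejected, because pid=9 did not finish until 11:43:08.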



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21498) Master OOM when SplitTableRegionProcedure new CacheConfig and instantiate a new BlockCache

2018-12-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724932#comment-16724932
 ] 

Hudson commented on HBASE-21498:


Results for branch branch-2.1
[build #696 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/696/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/696//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/696//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/696//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Master OOM when SplitTableRegionProcedure new CacheConfig and instantiate a 
> new BlockCache
> --
>
> Key: HBASE-21498
> URL: https://issues.apache.org/jira/browse/HBASE-21498
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2, 2.0.4
>
> Attachments: HBASE-21498.master.001.patch, 
> HBASE-21498.master.002.patch, HBASE-21498.master.003.patch, 
> HBASE-21498.master.004.patch, HBASE-21498.master.005.patch, 
> HBASE-21498.master.006.patch, HBASE-21498.master.006.patch, 
> HBASE-21498.master.007.patch, HBASE-21498.master.007.patch
>
>
> In our cluster, we use a small heap/offheap config for the master. After 
> HBASE-21290, the master doesn't instantiate a BlockCache when it does not carry 
> tables. But SplitTableRegionProcedure.splitStoreFiles news a CacheConfig, 
> which instantiates a new BlockCache if none was initialized before, 
> making the master OOM.
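The failure mode described above is the classic lazy-singleton trap: a code path that only needs configuration values triggers cache allocation as a side effect. A minimal illustration (not HBase code — names and sizes are arbitrary stand-ins):

```java
// Minimal illustration of the OOM pattern: constructing a config object
// allocates a global cache as a side effect, even on a master that
// carries no tables and should never own a BlockCache.
public class LazyCacheTrap {
    static int cacheAllocations = 0;

    // Before the fix (sketch): constructing CacheConfig allocates a cache.
    static Object newCacheConfigEager() {
        cacheAllocations++;           // this allocation is what OOMs a small-heap master
        return new byte[16];          // tiny stand-in for a real BlockCache
    }

    // After the fix (sketch): procedures that only need config values
    // must not trigger cache instantiation as a side effect.
    static Object newCacheConfigLazy(boolean carriesTables) {
        if (!carriesTables) {
            return null;              // master without tables: no cache
        }
        return newCacheConfigEager();
    }
}
```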



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21498) Master OOM when SplitTableRegionProcedure new CacheConfig and instantiate a new BlockCache

2018-12-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724917#comment-16724917
 ] 

Hudson commented on HBASE-21498:


Results for branch branch-2.0
[build #1178 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1178/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1178//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1178//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1178//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Master OOM when SplitTableRegionProcedure new CacheConfig and instantiate a 
> new BlockCache
> --
>
> Key: HBASE-21498
> URL: https://issues.apache.org/jira/browse/HBASE-21498
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2, 2.0.4
>
> Attachments: HBASE-21498.master.001.patch, 
> HBASE-21498.master.002.patch, HBASE-21498.master.003.patch, 
> HBASE-21498.master.004.patch, HBASE-21498.master.005.patch, 
> HBASE-21498.master.006.patch, HBASE-21498.master.006.patch, 
> HBASE-21498.master.007.patch, HBASE-21498.master.007.patch
>
>
> In our cluster, we use a small heap/offheap config for the master. After 
> HBASE-21290, the master doesn't instantiate a BlockCache when it does not carry 
> tables. But SplitTableRegionProcedure.splitStoreFiles news a CacheConfig, 
> which instantiates a new BlockCache if none was initialized before, 
> making the master OOM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21514) Refactor CacheConfig

2018-12-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724947#comment-16724947
 ] 

Hudson commented on HBASE-21514:


Results for branch branch-2
[build #1566 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1566/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1566//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1566//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1561//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Refactor CacheConfig
> 
>
> Key: HBASE-21514
> URL: https://issues.apache.org/jira/browse/HBASE-21514
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21514.branch-2.001.patch, 
> HBASE-21514.branch-2.002.patch, HBASE-21514.branch-2.003.patch, 
> HBASE-21514.master.001.patch, HBASE-21514.master.002.patch, 
> HBASE-21514.master.003.patch, HBASE-21514.master.004.patch, 
> HBASE-21514.master.005.patch, HBASE-21514.master.006.patch, 
> HBASE-21514.master.007.patch, HBASE-21514.master.008.patch, 
> HBASE-21514.master.009.patch, HBASE-21514.master.010.patch, 
> HBASE-21514.master.011.patch, HBASE-21514.master.011.patch, 
> HBASE-21514.master.012.patch, HBASE-21514.master.013.patch, 
> HBASE-21514.master.013.patch, HBASE-21514.master.014.patch, 
> HBASE-21514.master.addendum.patch
>
>
> # Add the block cache and mob file cache as HRegionServer member variables. One 
> RS has one block cache and one mob file cache.
>  # Move the global cache instances from CacheConfig to BlockCacheFactory. 
> Only keep config stuff in CacheConfig. The CacheConfig still has a 
> reference to the RegionServer's block cache. Caching a block requires that the 
> block cache is present and the related config is true.
>  # Remove MobCacheConfig. It was only used for the global mob file cache 
> instance. After moving the mob file cache to the RegionServer, it is not used 
> anymore.
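The division of responsibilities in the points above can be sketched as follows. The class and method names mirror the description, but the code is an illustrative assumption, not the actual patch:

```java
// Illustrative sketch of the refactor described above: CacheConfig keeps
// only configuration, plus a reference to the RegionServer-owned cache.
public class CacheConfigSketch {

    // Stand-in for the real BlockCache interface.
    interface BlockCache {}

    static class CacheConfig {
        private final boolean cacheDataOnRead; // config stuff stays here
        private final BlockCache blockCache;   // reference to the RS's cache

        CacheConfig(boolean cacheDataOnRead, BlockCache blockCache) {
            this.cacheDataOnRead = cacheDataOnRead;
            this.blockCache = blockCache;
        }

        // Caching a block requires both: the cache is present and the
        // related config is true.
        boolean shouldCacheBlockOnRead() {
            return blockCache != null && cacheDataOnRead;
        }
    }

    public static boolean demo(boolean hasCache, boolean configOn) {
        BlockCache cache = hasCache ? new BlockCache() {} : null;
        return new CacheConfig(configOn, cache).shouldCacheBlockOnRead();
    }
}
```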



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21492) CellCodec Written To WAL Before It's Verified

2018-12-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724946#comment-16724946
 ] 

Hudson commented on HBASE-21492:


Results for branch branch-2
[build #1566 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1566/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1566//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1566//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1561//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> CellCodec Written To WAL Before It's Verified
> -
>
> Key: HBASE-21492
> URL: https://issues.apache.org/jira/browse/HBASE-21492
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 1.2.7, 2.0.2
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 2.2.0, 2.1.2, 2.0.4, 1.4.10
>
> Attachments: HBASE-21492-branch-1.patch, HBASE-21492.1.patch, 
> HBASE-21492.2.patch, HBASE-21492.2.patch
>
>
> The cell codec class name is written into the WAL file, but the cell codec 
> class is not actually verified to exist.  Therefore, users can inadvertently 
> configure an invalid class name and it will be recorded into the WAL file.  
> At that point, the WAL file becomes unreadable and blocks processing of all 
> other WAL files.
> {code:java|title=AbstractProtobufLogWriter.java}
>   private WALHeader buildWALHeader0(Configuration conf, WALHeader.Builder 
> builder) {
> if (!builder.hasWriterClsName()) {
>   builder.setWriterClsName(getWriterClassName());
> }
> if (!builder.hasCellCodecClsName()) {
>   builder.setCellCodecClsName(WALCellCodec.getWALCellCodecClass(conf));
> }
> return builder.build();
>   }
> {code}
> https://github.com/apache/hbase/blob/025ddce868eb06b4072b5152c5ffae5a01e7ae30/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/AbstractProtobufLogWriter.java#L78-L86
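One way to avoid recording an unloadable codec is to resolve the class name before it is written into the WAL header, so a typo fails fast instead of poisoning the file. The sketch below is a simplified illustration of that check, not the actual HBASE-21492 patch; the method name `verifyClassExists` is hypothetical.

```java
// Simplified sketch: resolve the configured codec class name before
// persisting it into the WAL header, so an invalid class name fails
// at write time rather than making the WAL unreadable later.
public class CodecCheck {
    public static String verifyClassExists(String className) {
        try {
            Class.forName(className);
            return className;
        } catch (ClassNotFoundException e) {
            throw new IllegalArgumentException(
                "Configured cell codec class not found: " + className, e);
        }
    }
}
```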



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21596) HBase Shell "delete" command can cause older versions to be shown even if VERSIONS is configured as 1

2018-12-19 Thread Wellington Chevreuil (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724987#comment-16724987
 ] 

Wellington Chevreuil commented on HBASE-21596:
--

Sharing some thoughts I got while investigating this:

1) The real behaviour problem here is when VERSIONS => 1, because deleting a cell 
at a specific TS causes any previous version to become visible again, as long as 
those versions are still in the memstore. That makes no sense: if VERSIONS 
is set to 1, there should never be more than one version available. The behaviour is 
also inconsistent: if the memstore containing the previous versions has already 
been flushed, the same delete causes no version to be available.

2) I believe the problem is not within Delete itself; it's working as expected, 
since we do want the ability to delete specific versions when dealing with 
VERSIONS > 1. Maybe the shell command description should be written more clearly, as I 
have seen some users confused about the expected behaviour, mainly by the sentence 
below from the current hbase shell description:

{noformat}
When scanning, a delete cell suppresses older
versions. To delete a cell from  't1' at row 'r1' under column 'c1'
marked with the time 'ts1', do:
{noformat}

3) I think the ideal solution is to change the behaviour of the put operation, to 
actually clean up or "delete mark" older versions of cells whose number of 
versions is greater than the family's configured VERSIONS attribute. I was thinking of 
following this direction; any thoughts/ideas are welcome.
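Point 3 amounts to trimming versions at write time: keep at most VERSIONS cells per coordinate, newest first, so a later delete of the newest TS cannot resurrect an older version. The sketch below illustrates that policy over an in-memory list of timestamps; it is an illustration of the idea, not a proposal-accurate patch.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative sketch of trimming older versions at put time so that a
// later delete of the newest TS cannot resurrect an older version.
public class VersionTrim {
    // Returns the retained timestamps after writing newTs, keeping at
    // most maxVersions entries, newest first.
    public static List<Long> put(List<Long> timestamps, long newTs, int maxVersions) {
        List<Long> result = new ArrayList<>(timestamps);
        result.add(newTs);
        // Newest first, then drop anything beyond the configured VERSIONS.
        result.sort(Collections.reverseOrder());
        return new ArrayList<>(result.subList(0, Math.min(result.size(), maxVersions)));
    }
}
```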

> HBase Shell "delete" command can cause older versions to be shown even if 
> VERSIONS is configured as 1
> -
>
> Key: HBASE-21596
> URL: https://issues.apache.org/jira/browse/HBASE-21596
> Project: HBase
>  Issue Type: Bug
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
>
> HBase Shell's delete command is supposed to operate over a specific TS. If no 
> TS is given, it assumes the latest TS for the cell and puts a delete 
> marker for it. 
> However, for a CF with VERSIONS => 1, if multiple puts were performed for the 
> same cell, there may be multiple cell versions in the memstore, so delete 
> would only be "delete marking" one of those, causing the most recent 
> unmarked one to be shown on gets/scans, which then contradicts the CF "VERSIONS 
> => 1" configuration.
> This issue is not seen with the deleteall command or the Delete operation from 
> the Java API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21048) Get LogLevel is not working from console in secure environment

2018-12-19 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-21048:

Attachment: HBASE-21048.master.003.patch

> Get LogLevel is not working from console in secure environment
> --
>
> Key: HBASE-21048
> URL: https://issues.apache.org/jira/browse/HBASE-21048
> Project: HBase
>  Issue Type: Bug
>Reporter: Chandra Sekhar
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HBASE-21048.001.patch, HBASE-21048.master.001.patch, 
> HBASE-21048.master.002.patch, HBASE-21048.master.003.patch
>
>
> When we try to get log level of specific package in secure environment, 
> getting SocketException.
> {code:java}
> hbase/master/bin# ./hbase org.apache.hadoop.hbase.http.log.LogLevel -getlevel 
> host-:16010 org.apache.hadoop.hbase
> Connecting to http://host-:16010/logLevel?log=org.apache.hadoop.hbase
> java.net.SocketException: Unexpected end of file from server
> {code}
> It tries to connect over http instead of https. 
> Code snippet handling only http in *LogLevel.java*:
> {code:java}
>   public static void main(String[] args) {
>     if (args.length == 3 && "-getlevel".equals(args[0])) {
>       process("http://" + args[1] + "/logLevel?log=" + args[2]);
>       return;
>     }
>     else if (args.length == 4 && "-setlevel".equals(args[0])) {
>       process("http://" + args[1] + "/logLevel?log=" + args[2]
>           + "&level=" + args[3]);
>       return;
>     }
>     System.err.println(USAGES);
>     System.exit(-1);
>   }
> {code}
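A minimal fix direction is to pick the scheme from the SSL configuration rather than hard-coding http. The sketch below is an assumption about the shape of such a fix, not the actual patch code; the `isSslEnabled` flag stands in for reading the SSL setting from the configuration.

```java
// Hypothetical sketch: build the LogLevel URL with a scheme chosen from
// the cluster's SSL setting instead of a hard-coded "http://".
public class LogLevelUrl {
    public static String build(boolean isSslEnabled, String hostPort, String logName) {
        String scheme = isSslEnabled ? "https://" : "http://";
        return scheme + hostPort + "/logLevel?log=" + logName;
    }
}
```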



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21618) Scan with the same startRow(inclusive=true) and stopRow(inclusive=false) returns one result

2018-12-19 Thread Jermy Li (JIRA)
Jermy Li created HBASE-21618:


 Summary: Scan with the same startRow(inclusive=true) and 
stopRow(inclusive=false) returns one result
 Key: HBASE-21618
 URL: https://issues.apache.org/jira/browse/HBASE-21618
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 2.0.2
 Environment: hbase server 2.0.2
hbase client 2.0.0
Reporter: Jermy Li


I expect the following code to return none result, but still return a row:
{code:java}
byte[] rowkey = "some key existed";
Scan scan = new Scan();
scan.withStartRow(rowkey, true);
scan.withStopRow(rowkey, false);
htable.getScanner(scan);
{code}





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21048) Get LogLevel is not working from console in secure environment

2018-12-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725008#comment-16725008
 ] 

Hadoop QA commented on HBASE-21048:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
14s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m  
7s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m  
7s{color} | {color:red} hbase-http in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  7s{color} 
| {color:red} hbase-http in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m  
7s{color} | {color:red} The patch fails to run checkstyle in hbase-http {color} 
|
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedjars {color} | {color:red}  0m  
8s{color} | {color:red} patch has 13 errors when building our shaded downstream 
artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  0m  
7s{color} | {color:red} The patch causes 13 errors with Hadoop v2.7.4. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  0m 
15s{color} | {color:red} The patch causes 13 errors with Hadoop v3.0.0. {color} 
|
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m  
6s{color} | {color:red} hbase-http in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m  
7s{color} | {color:red} hbase-http in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m  7s{color} 
| {color:red} hbase-http in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 7s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21048 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952353/HBASE-21048.master.003.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  shadedjars  
hadoopcheck  xml  compile  findbugs  hbaseanti  checkstyle  |
| uname | Linux c2c012b4cae1 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8991877bb2 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HBASE-Build/

[jira] [Commented] (HBASE-21512) Introduce an AsyncClusterConnection and replace the usage of ClusterConnection

2018-12-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725036#comment-16725036
 ] 

Hudson commented on HBASE-21512:


Results for branch HBASE-21512
[build #22 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/22/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/22//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/22//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/22//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Introduce an AsyncClusterConnection and replace the usage of ClusterConnection
> --
>
> Key: HBASE-21512
> URL: https://issues.apache.org/jira/browse/HBASE-21512
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> At least for the RSProcedureDispatcher, with CompletableFuture we do not need 
> to set a delay and use a thread pool any more, which could reduce the 
> resource usage and also the latency.
> Once this is done, I think we can remove the ClusterConnection completely, 
> and start to rewrite the old sync client based on the async client, which 
> could reduce the code base a lot for our client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-14939) Document bulk loaded hfile replication

2018-12-19 Thread Ashish Singhi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-14939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725042#comment-16725042
 ] 

Ashish Singhi commented on HBASE-14939:
---

Thanks [~jojochuang]. The patch looks good to me.

> Document bulk loaded hfile replication
> --
>
> Key: HBASE-14939
> URL: https://issues.apache.org/jira/browse/HBASE-14939
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Ashish Singhi
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HBASE-14939.master.001.patch
>
>
> After HBASE-13153 is committed we need to add that information under the 
> Cluster Replication section in HBase book.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21610) numOpenConnections metric is set to -1 when zero server channel exist

2018-12-19 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21610:
--
Attachment: HBASE-21610.patch

> numOpenConnections metric is set to -1 when zero server channel exist
> -
>
> Key: HBASE-21610
> URL: https://issues.apache.org/jira/browse/HBASE-21610
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.1.1, 2.0.3
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-21610.patch, HBASE-21610.patch, HBASE-21610.patch, 
> HBASE-21610.patch
>
>
> In NettyRpcServer, the numOpenConnections metric is set to -1 when no server 
> channel exists.
> {code}
> @Override
>  public int getNumOpenConnections() {
>  // allChannels also contains the server channel, so exclude that from the 
> count.
>  return allChannels.size() - 1;
>  }
> {code}
>  
>  We should not decrease the channel size by 1 when no server channel exists.
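A straightforward fix is to clamp the result at zero, so the metric never goes negative before the server channel is registered. This is a sketch of that idea, not necessarily the committed HBASE-21610 patch:

```java
// Sketch: compute open connections from a channel count that includes
// the server channel, clamping at zero when no server channel exists yet.
public class OpenConnections {
    public static int numOpenConnections(int allChannelsSize) {
        // allChannels also contains the server channel, so exclude it from
        // the count -- but never report a negative number of connections.
        return Math.max(allChannelsSize - 1, 0);
    }
}
```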



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20917) MetaTableMetrics#stop references uninitialized requestsMap for non-meta region

2018-12-19 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725057#comment-16725057
 ] 

Sean Busbey commented on HBASE-20917:
-

sorry for not updating. the 2.0.4 patch already landed, so you should be good 
to go. this is open still for the branch-2.1 backport, which is blocked on 
HBASE-19722. I think no rush on that one; can wait for the next 2.1 release.

> MetaTableMetrics#stop references uninitialized requestsMap for non-meta region
> --
>
> Key: HBASE-20917
> URL: https://issues.apache.org/jira/browse/HBASE-20917
> Project: HBase
>  Issue Type: Bug
>  Components: meta, metrics
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.4.6, 2.2.0, 2.0.4
>
> Attachments: 20917.addendum, 20917.v1.txt, 20917.v2.txt
>
>
> I noticed the following in test output:
> {code}
> 2018-07-21 15:54:43,181 ERROR [RS_CLOSE_REGION-regionserver/172.17.5.4:0-1] 
> executor.EventHandler(186): Caught throwable while processing event 
> M_RS_CLOSE_REGION
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.coprocessor.MetaTableMetrics.stop(MetaTableMetrics.java:329)
>   at 
> org.apache.hadoop.hbase.coprocessor.BaseEnvironment.shutdown(BaseEnvironment.java:91)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionEnvironment.shutdown(RegionCoprocessorHost.java:165)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.shutdown(CoprocessorHost.java:290)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$4.postEnvCall(RegionCoprocessorHost.java:559)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:622)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postClose(RegionCoprocessorHost.java:551)
>   at org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1678)
>   at org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1484)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.CloseRegionHandler.process(CloseRegionHandler.java:104)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> {code}
> {{requestsMap}} is only initialized for the meta region.
> However, check for meta region is absent in the stop method:
> {code}
>   public void stop(CoprocessorEnvironment e) throws IOException {
> // since meta region can move around, clear stale metrics when stop.
> for (String meterName : requestsMap.keySet()) {
> {code}
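Since requestsMap is only initialized for the meta region, stop needs the same meta-region guard as initialization. The sketch below shows the shape of such a null check in a simplified stand-in class; it is an illustration, not the committed addendum.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: guard stop() against the non-meta case, where requestsMap was
// never initialized, avoiding the NullPointerException in the stack trace.
public class MetricsStopGuard {
    private Map<String, Long> requestsMap; // only set for the meta region

    public void startForMeta() {
        requestsMap = new HashMap<>();
    }

    // Returns true if stale metrics were cleared, false for non-meta regions.
    public boolean stop() {
        if (requestsMap == null) {
            return false; // non-meta region: nothing to clear
        }
        requestsMap.clear(); // meta region can move; clear stale metrics
        return true;
    }
}
```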



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21610) numOpenConnections metric is set to -1 when zero server channel exist

2018-12-19 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725046#comment-16725046
 ] 

stack commented on HBASE-21610:
---

Looks like TestRecoveredEdits depends on old behavior [~pankaj2461]? What you 
think sir? Retrying in meantime.

> numOpenConnections metric is set to -1 when zero server channel exist
> -
>
> Key: HBASE-21610
> URL: https://issues.apache.org/jira/browse/HBASE-21610
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.1.1, 2.0.3
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-21610.patch, HBASE-21610.patch, HBASE-21610.patch, 
> HBASE-21610.patch
>
>
> In NettyRpcServer, the numOpenConnections metric is set to -1 when no server 
> channel exists.
> {code}
> @Override
>  public int getNumOpenConnections() {
>  // allChannels also contains the server channel, so exclude that from the 
> count.
>  return allChannels.size() - 1;
>  }
> {code}
>  
>  We should not decrease the channel size by 1 when no server channel exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19722) Meta query statistics metrics source

2018-12-19 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725058#comment-16725058
 ] 

Sean Busbey commented on HBASE-19722:
-

I'm currently planning to wait until after the current 2.1 RCs to land this 
backport, FYI. Mostly just because of the holiday. If someone needs it sooner 
and is willing to watch that post-commit goes fine, I have the backport ready 
to push.

> Meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, meta, metrics, Operability
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 1.4.6, 2.2.0, 2.0.2
>
> Attachments: HBASE-19722-branch-2.1.v1.patch, 
> HBASE-19722.branch-1.v001.patch, HBASE-19722.branch-1.v002.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch, 
> HBASE-19722.master.014.patch, HBASE-19722.master.015.patch, 
> HBASE-19722.master.016.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname.
> Can be implemented as a coprocessor.
>  
>  
>  
>  
> ===
> *Release Note* (WIP)
> *1. Usage:*
> Use this coprocessor by adding the section below to hbase-site.xml:
> {code:xml}
> <property>
>   <name>hbase.coprocessor.region.classes</name>
>   <value>org.apache.hadoop.hbase.coprocessor.MetaTableMetrics</value>
> </property>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21618) Scan with the same startRow(inclusive=true) and stopRow(inclusive=false) returns one result

2018-12-19 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725061#comment-16725061
 ] 

Duo Zhang commented on HBASE-21618:
---

IIRC this is intentional, to keep compatibility with old code, where we consider 
a scan which has the same startKey and endKey to be a get...

On vacation, so I don't have cycles to read the code...
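The compatibility behaviour described here can be illustrated with a byte-wise comparison: when the start row equals the stop row, old code treated the scan as a Get, so the single row is returned even though stopRow is exclusive. The helper below is purely illustrative, not HBase client code.

```java
import java.util.Arrays;

// Illustrative only: old client code considered a scan with identical
// start and stop rows to be a Get, so the row is returned even when
// the stop row is marked exclusive.
public class ScanAsGet {
    public static boolean treatedAsGet(byte[] startRow, byte[] stopRow) {
        return Arrays.equals(startRow, stopRow);
    }
}
```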

> Scan with the same startRow(inclusive=true) and stopRow(inclusive=false) 
> returns one result
> ---
>
> Key: HBASE-21618
> URL: https://issues.apache.org/jira/browse/HBASE-21618
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.2
> Environment: hbase server 2.0.2
> hbase client 2.0.0
>Reporter: Jermy Li
>Priority: Major
>
> I expect the following code to return none result, but still return a row:
> {code:java}
> byte[] rowkey = "some key existed";
> Scan scan = new Scan();
> scan.withStartRow(rowkey, true);
> scan.withStopRow(rowkey, false);
> htable.getScanner(scan);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21611) REGION_STATE_TRANSITION_CONFIRM_CLOSED should interact better with crash procedure

2018-12-19 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725066#comment-16725066
 ] 

Duo Zhang commented on HBASE-21611:
---

This is by design, I'd say: we have to retry until the SCP interrupts us. 
Checking for the SCP may be possible, but it would lead to more complicated 
logic, and also to more possible races and bugs... And does it spam the logs? 
Maybe the problem is that the backoff logic is broken? Otherwise the retry 
interval should soon grow to seconds or even minutes.
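For reference, a capped exponential backoff behaves roughly like the hypothetical sketch below (invented names, not HBase's actual ProcedureUtil logic): after about a dozen attempts the interval is already minutes long, so persistent rapid retries would indeed point at broken backoff.

```java
public class BackoffSketch {
    // Capped exponential backoff: the delay doubles per attempt up to a maximum.
    static long backoffMillis(int attempt, long baseMillis, long maxMillis) {
        long delay = baseMillis << Math.min(attempt, 30); // clamp the shift to avoid overflow
        return Math.min(delay, maxMillis);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 12; attempt++) {
            System.out.println(attempt + ": " + backoffMillis(attempt, 100, 600_000L) + " ms");
        }
    }
}
```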

> REGION_STATE_TRANSITION_CONFIRM_CLOSED should interact better with crash 
> procedure
> --
>
> Key: HBASE-21611
> URL: https://issues.apache.org/jira/browse/HBASE-21611
> Project: HBase
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Priority: Major
>
> 1) Not a bug per se, since HDFS is not supposed to lose files, just a bit 
> fragile.
> When a dead server's WAL directory is deleted (due to a manual intervention, 
> or some issue with HDFS) while some regions are in CLOSING state on that 
> server, they get stuck forever in REGION_STATE_TRANSITION_CONFIRM_CLOSED - 
> REGION_STATE_TRANSITION_CLOSE - "give up and mark the procedure as complete, 
> the parent procedure will take care of this" loop. There's no crash procedure 
> for the server so nobody ever takes care of that.
> 2) Under normal circumstances, when a large WAL is being split, this same 
> loop keeps spamming the logs and wasting resources for no reason, until the 
> crash procedure completes. There's no reason for it to retry - it should just 
> wait for the crash procedure.





[jira] [Commented] (HBASE-21614) RIT recovery with ServerCrashProcedure is broken in multiple ways

2018-12-19 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725075#comment-16725075
 ] 

Duo Zhang commented on HBASE-21614:
---

{quote}
That seems to be a problem #1 - it immediately gets regions to later recover, 
so in this case it gets nothing.
{quote}

Which version do you use? IIRC this has already been fixed. SCP will wait if 
meta is not loaded, unless it is in the SPLIT_META or ASSIGN_META state.

Here is the code, at the top of the executeFromState method in SCP:
{code}
switch (state) {
  case SERVER_CRASH_START:
  case SERVER_CRASH_SPLIT_META_LOGS:
  case SERVER_CRASH_ASSIGN_META:
break;
  default:
// If hbase:meta is not assigned, yield.
if (env.getAssignmentManager().waitMetaLoaded(this)) {
  throw new ProcedureSuspendedException();
}
}
{code}

> RIT recovery with ServerCrashProcedure is broken in multiple ways
> -
>
> Key: HBASE-21614
> URL: https://issues.apache.org/jira/browse/HBASE-21614
> Project: HBase
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Priority: Major
>
> Master is restarting after a previous master crashed while recovering some 
> regions from a dead server.
> Master recovers the RIT for the region; however, the RIT has no location 
> (logged, at least) in CONFIRM_CLOSE state. That is a potential problem #0.5 - 
> confirm where? But that should be covered by meta, so not a big deal, right? 
> As such, it doesn't seem to add the region to the server map anywhere
> {noformat}
> 2018-12-17 14:51:14,606 INFO  [master/:17000:becomeActiveMaster] 
> assignment.AssignmentManager: Attach pid=38015, 
> state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_CLOSED, hasLock=false; 
> TransitRegionStateProcedure table=t1, region=region1, REOPEN/MOVE to 
> rit=OFFLINE, location=null, table=t1, region=region1 to restore RIT
> {noformat}
> However, in this case ServerCrashProcedure for the server kicks off BEFORE 
> meta is loaded.
> That seems to be a problem #1 - it immediately gets regions to later recover, 
> so in this case it gets nothing.
> I've grepped our logs for successful cases of SCP interacting with region 
> transition at startup, and in all cases the meta was loaded before SCP.
> Seems like a race condition.
> {noformat}
> 2018-12-17 14:51:14,625 INFO  [master/:17000:becomeActiveMaster] 
> master.RegionServerTracker: Starting RegionServerTracker; 0 have existing 
> ServerCrashProcedures, 103 possibly 'live' servers, and 1 'splitting'.
> 2018-12-17 14:51:20,770 INFO  [master/:17000:becomeActiveMaster] 
> master.ServerManager: Processing expiration of server1,17020,1544636616174 on 
> ,17000,1545087053243
> 2018-12-17 14:51:20,921 INFO  [master/:17000:becomeActiveMaster] 
> assignment.AssignmentManager: Added server1,17020,1544636616174 to dead 
> servers which carryingMeta=false, submitted ServerCrashProcedure pid=111298
> 2018-12-17 14:51:30,728 INFO  [PEWorker-13] procedure.ServerCrashProcedure: 
> Start pid=111298, state=RUNNABLE:SERVER_CRASH_START, hasLock=true; 
> ServerCrashProcedure server=server1,17020,1544636616174, splitWal=true, 
> meta=false
> {noformat}
> Meta is only loaded 11-12 seconds later.
> If one looks at meta-loading code however, there is one more problem #2 - the 
> region is in CLOSING state, so the {{addRegionToServer}} is not going to be 
> called - it's only called for OPENED regions. 
> Expanding on the above, I've only seen SCP unblock stuck region transition at 
> startup when region started out in meta as OPEN.
> {noformat}
> 2018-12-17 14:51:42,403 INFO  [master/:17000:becomeActiveMaster] 
> assignment.RegionStateStore: Load hbase:meta entry region=region1, 
> regionState=CLOSING, lastHost=server1,17020,1544636616174, 
> regionLocation=server1,17020,1544636616174, openSeqNum=629131
> {noformat}
> SCP predictably finishes without doing anything; no other logs for this pid
> {noformat}
> 2018-12-17 14:52:19,046 INFO  [PEWorker-2] procedure2.ProcedureExecutor: 
> Finished pid=111298, state=SUCCESS, hasLock=false; ServerCrashProcedure 
> server=server1,17020,1544636616174, splitWal=true, meta=false in 58.0010sec
> {noformat}
> After that, region is still stuck trying to be closed in 
> TransitRegionStateProcedure; it's in the same state for hours including 
> across master restarts.
> {noformat}
> 2018-12-17 15:09:35,216 WARN  [PEWorker-14] 
> assignment.TransitRegionStateProcedure: Failed transition, suspend 604secs 
> pid=38015, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE, hasLock=true; 
> TransitRegionStateProcedure table=t1, region=region1, REOPEN/MOVE; 
> rit=CLOSING, location=server1,17020,1544636616174; waiting on rectified 
> condition fixed by other Procedure or operator intervention
> {noformat}




[jira] [Created] (HBASE-21619) Fix warning message caused by incorrect ternary operator evaluation

2018-12-19 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HBASE-21619:
---

 Summary: Fix warning message caused by incorrect ternary operator 
evaluation
 Key: HBASE-21619
 URL: https://issues.apache.org/jira/browse/HBASE-21619
 Project: HBase
  Issue Type: Bug
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


{code:title=LoadIncrementalHFiles#doBulkLoad}
LOG.warn(
  "Bulk load operation did not find any files to load in " + "directory 
" + hfofDir != null
  ? hfofDir.toUri().toString()
  : "" + ".  Does it contain files in " +
  "subdirectories that correspond to column family names?");
{code}
JDK complains that {{"Bulk load operation did not find any files to load in " + 
"directory " + hfofDir != null}} is always true, which is not what is intended, 
and that produces the wrong message.
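The issue can be reproduced in isolation. The sketch below (hypothetical names, not the HBase source) shows that {{+}} binds tighter than {{!=}}, so the null check applies to the concatenated string rather than to the variable itself:

```java
public class TernaryPrecedenceDemo {
    // Broken form: parsed as ("directory " + dir) != null, which is always
    // true, so the first branch runs even when dir itself is null.
    static String brokenMessage(Object dir) {
        return "directory " + dir != null
            ? String.valueOf(dir)
            : "fallback message";
    }

    // One plausible fix: parenthesize the ternary so the null check
    // applies to dir and the full message is always emitted.
    static String fixedMessage(Object dir) {
        return "directory " + (dir != null ? dir.toString() : "(none)");
    }

    public static void main(String[] args) {
        System.out.println(brokenMessage(null)); // prints "null"
        System.out.println(fixedMessage(null));  // prints "directory (none)"
    }
}
```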





[jira] [Updated] (HBASE-21619) Fix warning message caused by incorrect ternary operator evaluation

2018-12-19 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-21619:

Attachment: HBASE-21619.master.001.patch

> Fix warning message caused by incorrect ternary operator evaluation
> ---
>
> Key: HBASE-21619
> URL: https://issues.apache.org/jira/browse/HBASE-21619
> Project: HBase
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
> Attachments: HBASE-21619.master.001.patch
>
>
> {code:title=LoadIncrementalHFiles#doBulkLoad}
> LOG.warn(
>   "Bulk load operation did not find any files to load in " + 
> "directory " + hfofDir != null
>   ? hfofDir.toUri().toString()
>   : "" + ".  Does it contain files in " +
>   "subdirectories that correspond to column family names?");
> {code}
> JDK complains {{"Bulk load operation did not find any files to load in " + 
> "directory " + hfofDir != null}} is always true, which is not what is 
> intended, and that produces a wrong message.





[jira] [Updated] (HBASE-21619) Fix warning message caused by incorrect ternary operator evaluation

2018-12-19 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-21619:

Status: Patch Available  (was: Open)

Fixed the ternary operation and used slf4j parameterized logging for better 
readability.

> Fix warning message caused by incorrect ternary operator evaluation
> ---
>
> Key: HBASE-21619
> URL: https://issues.apache.org/jira/browse/HBASE-21619
> Project: HBase
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
> Attachments: HBASE-21619.master.001.patch
>
>
> {code:title=LoadIncrementalHFiles#doBulkLoad}
> LOG.warn(
>   "Bulk load operation did not find any files to load in " + 
> "directory " + hfofDir != null
>   ? hfofDir.toUri().toString()
>   : "" + ".  Does it contain files in " +
>   "subdirectories that correspond to column family names?");
> {code}
> JDK complains {{"Bulk load operation did not find any files to load in " + 
> "directory " + hfofDir != null}} is always true, which is not what is 
> intended, and that produces a wrong message.





[jira] [Commented] (HBASE-21610) numOpenConnections metric is set to -1 when zero server channel exist

2018-12-19 Thread Pankaj Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725094#comment-16725094
 ] 

Pankaj Kumar commented on HBASE-21610:
--

The failures are unrelated, sir. TestRecoveredEdits passes locally, and 
TestRestartCluster.testClusterRestartFailOver fails even without the code fix.

> numOpenConnections metric is set to -1 when zero server channel exist
> -
>
> Key: HBASE-21610
> URL: https://issues.apache.org/jira/browse/HBASE-21610
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.1.1, 2.0.3
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-21610.patch, HBASE-21610.patch, HBASE-21610.patch, 
> HBASE-21610.patch
>
>
> In NettyRpcServer, the numOpenConnections metric is set to -1 when no server 
> channel exists.
> {code}
> @Override
> public int getNumOpenConnections() {
>   // allChannels also contains the server channel, so exclude that from the count.
>   return allChannels.size() - 1;
> }
> {code}
>  
>  We should not decrease the channel count by 1 when no server channel exists.
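A minimal standalone sketch of the off-by-one (a hypothetical stand-in for Netty's ChannelGroup; the guard shown is one plausible fix, not necessarily the committed patch):

```java
import java.util.HashSet;
import java.util.Set;

public class ConnectionMetricSketch {
    // Stand-in for allChannels: when the server is running it contains the
    // listening server channel plus one entry per client connection.
    final Set<String> allChannels = new HashSet<>();

    // Broken behaviour: with zero channels, size() - 1 yields -1.
    int brokenNumOpenConnections() {
        return allChannels.size() - 1;
    }

    // Guarded version: never report a negative count when no server
    // channel exists yet.
    int fixedNumOpenConnections() {
        return Math.max(allChannels.size() - 1, 0);
    }

    public static void main(String[] args) {
        ConnectionMetricSketch m = new ConnectionMetricSketch();
        System.out.println(m.brokenNumOpenConnections()); // -1
        System.out.println(m.fixedNumOpenConnections());  // 0
    }
}
```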





[jira] [Commented] (HBASE-19722) Meta query statistics metrics source

2018-12-19 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725133#comment-16725133
 ] 

stack commented on HBASE-19722:
---

Go for it. I made a mistake doing the RC and have to start over.

> Meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, meta, metrics, Operability
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 1.4.6, 2.2.0, 2.0.2
>
> Attachments: HBASE-19722-branch-2.1.v1.patch, 
> HBASE-19722.branch-1.v001.patch, HBASE-19722.branch-1.v002.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch, 
> HBASE-19722.master.014.patch, HBASE-19722.master.015.patch, 
> HBASE-19722.master.016.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname.
> Can be implemented as a coprocessor.
>  
>  
>  
>  
> ===
> *Release Note* (WIP)
> *1. Usage:*
> Use this coprocessor by adding the section below to hbase-site.xml:
> {{<property>}}
> {{    <name>hbase.coprocessor.region.classes</name>}}
> {{    <value>org.apache.hadoop.hbase.coprocessor.MetaTableMetrics</value>}}
> {{</property>}}





[jira] [Created] (HBASE-21620) Problem in scan query when using more than one column prefix filter in some cases.

2018-12-19 Thread Mohamed Mohideen Meeran (JIRA)
Mohamed Mohideen Meeran created HBASE-21620:
---

 Summary: Problem in scan query when using more than one column 
prefix filter in some cases.
 Key: HBASE-21620
 URL: https://issues.apache.org/jira/browse/HBASE-21620
 Project: HBase
  Issue Type: Bug
  Components: scan
Affects Versions: 1.4.8
 Environment: hbase-1.4.8, hbase-1.4.9

hadoop-2.7.3
Reporter: Mohamed Mohideen Meeran
 Attachments: HBaseImportData.java, file.txt

In some cases, we are unable to get scan results when using more than one 
column prefix filter.

Attached a Java file to import the data we used and a text file containing the 
values.

While executing the following query (from the hbase shell as well as a Java 
program), it waits indefinitely, and after the RPC timeout we got the error 
below. We also noticed high CPU, high load average, and very frequent young GC 
in the region server hosting this row:

scan 'namespace:tablename', {STARTROW => 'test', ENDROW => 'test', FILTER => 
"ColumnPrefixFilter('1544770422942010001_') OR 
ColumnPrefixFilter('1544769883529010001_')"}

ROW                                                  COLUMN+CELL
ERROR: Call id=18, waitTime=60005, rpcTimetout=6

Note: A full table scan, and a scan with a single column prefix filter, work 
fine in this case.

The same query works fine on hbase-1.2.5.

Can you please help with this?





[jira] [Commented] (HBASE-19722) Meta query statistics metrics source

2018-12-19 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725134#comment-16725134
 ] 

stack commented on HBASE-19722:
---

^[~busbey] See above. Thanks.

> Meta query statistics metrics source
> 
>
> Key: HBASE-19722
> URL: https://issues.apache.org/jira/browse/HBASE-19722
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, meta, metrics, Operability
>Reporter: Andrew Purtell
>Assignee: Xu Cang
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 1.4.6, 2.2.0, 2.0.2
>
> Attachments: HBASE-19722-branch-2.1.v1.patch, 
> HBASE-19722.branch-1.v001.patch, HBASE-19722.branch-1.v002.patch, 
> HBASE-19722.master.010.patch, HBASE-19722.master.011.patch, 
> HBASE-19722.master.012.patch, HBASE-19722.master.013.patch, 
> HBASE-19722.master.014.patch, HBASE-19722.master.015.patch, 
> HBASE-19722.master.016.patch
>
>
> Implement a meta query statistics metrics source, created whenever a 
> regionserver starts hosting meta, removed when meta hosting moves. Provide 
> views on top tables by request counts, top meta rowkeys by request count, top 
> clients making requests by their hostname.
> Can be implemented as a coprocessor.
>  
>  
>  
>  
> ===
> *Release Note* (WIP)
> *1. Usage:*
> Use this coprocessor by adding the section below to hbase-site.xml:
> {{<property>}}
> {{    <name>hbase.coprocessor.region.classes</name>}}
> {{    <value>org.apache.hadoop.hbase.coprocessor.MetaTableMetrics</value>}}
> {{</property>}}





[jira] [Commented] (HBASE-21588) Procedure v2 wal splitting implementation

2018-12-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725157#comment-16725157
 ] 

Hadoop QA commented on HBASE-21588:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
46s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
11s{color} | {color:red} hbase-server: The patch generated 6 new + 293 
unchanged - 0 fixed = 299 total (was 293) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
42s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 28s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
1m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
43s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
26s{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}269m 19s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}330m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.procedure.TestServerCrashProcedure |
|   | hadoop.hbase.client.TestFromClientSide3 |
|   | hadoop.hbase.client.Tes

[jira] [Updated] (HBASE-21621) Reversed scan does not return expected number of rows

2018-12-19 Thread Nihal Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain updated HBASE-21621:
---
Attachment: HBASE-21621.master.UT.patch

> Reversed scan does not return expected number of rows
> --
>
> Key: HBASE-21621
> URL: https://issues.apache.org/jira/browse/HBASE-21621
> Project: HBase
>  Issue Type: Bug
>  Components: scan
>Affects Versions: 3.0.0, 2.1.1
>Reporter: Nihal Jain
>Priority: Critical
> Attachments: HBASE-21621.master.UT.patch
>
>






[jira] [Updated] (HBASE-21621) Reversed scan does not return expected number of rows

2018-12-19 Thread Nihal Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain updated HBASE-21621:
---
Status: Patch Available  (was: Open)

> Reversed scan does not return expected number of rows
> --
>
> Key: HBASE-21621
> URL: https://issues.apache.org/jira/browse/HBASE-21621
> Project: HBase
>  Issue Type: Bug
>  Components: scan
>Affects Versions: 2.1.1, 3.0.0
>Reporter: Nihal Jain
>Priority: Critical
> Attachments: HBASE-21621.master.UT.patch
>
>






[jira] [Created] (HBASE-21621) Reversed scan does not return expected number of rows

2018-12-19 Thread Nihal Jain (JIRA)
Nihal Jain created HBASE-21621:
--

 Summary: Reversed scan does not return expected number of rows
 Key: HBASE-21621
 URL: https://issues.apache.org/jira/browse/HBASE-21621
 Project: HBase
  Issue Type: Bug
  Components: scan
Affects Versions: 2.1.1, 3.0.0
Reporter: Nihal Jain








[jira] [Updated] (HBASE-21621) Reversed scan does not return expected number of rows

2018-12-19 Thread Nihal Jain (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nihal Jain updated HBASE-21621:
---
Description: 
*Steps to reproduce*
 # Create a table and put some data into it (data should be big enough, say N 
rows)
 # Flush the table
 # Scan the table with reversed set to true

*Expected Result*
N rows should be retrieved in reversed order

*Actual Result*
Fewer rows than expected are retrieved, with the following error in the logs

{noformat}
2018-12-19 21:55:32,944 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=39007] 
regionserver.StoreScanner(1000): Switch to stream read (scanned=262214 bytes) 
of cf
2018-12-19 21:55:32,955 ERROR 
[RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=39007] ipc.RpcServer(471): 
Unexpected throwable object 
java.lang.AssertionError: Key 
\x00\x00\x00\x00\x00\x00\x00\x09/cf:a/1545236714675/Put/vlen=131072/seqid=4 
followed by a error order key 
\x00\x00\x00\x00\x00\x00\x00\x0F/cf:a/1545236715545/Put/vlen=131072/seqid=8 in 
cf cf in reversed scan
at 
org.apache.hadoop.hbase.regionserver.ReversedStoreScanner.checkScanOrder(ReversedStoreScanner.java:105)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:568)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6598)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6762)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6535)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3252)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3501)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
2018-12-19 21:55:32,955 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=39007] ipc.CallRunner(142): 
callId: 508 service: ClientService methodName: Scan size: 47 connection: 
127.0.0.1:48328 deadline: 1545236792955, exception=java.io.IOException: Key 
\x00\x00\x00\x00\x00\x00\x00\x09/cf:a/1545236714675/Put/vlen=131072/seqid=4 
followed by a error order key 
\x00\x00\x00\x00\x00\x00\x00\x0F/cf:a/1545236715545/Put/vlen=131072/seqid=8 in 
cf cf in reversed scan
2018-12-19 21:55:33,060 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=39007] ipc.CallRunner(142): 
callId: 511 service: ClientService methodName: Scan size: 47 connection: 
127.0.0.1:48328 deadline: 1545236792955, 
exception=org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
Expected nextCallSeq: 1 But the nextCallSeq got from client: 0; 
request=scanner_id: 2421102592655360183 number_of_rows: 2147483647 
close_scanner: false next_call_seq: 0 client_handles_partials: true 
client_handles_heartbeats: true track_scan_metrics: false renew: false
2018-12-19 21:55:33,060 DEBUG [Time-limited test] 
client.ScannerCallableWithReplicas(200): Scan with primary region returns 
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
2421102592655360183 number_of_rows: 2147483647 close_scanner: false 
next_call_seq: 0 client_handles_partials: true client_handles_heartbeats: true 
track_scan_metrics: false renew: false
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.checkScanNextCallSeq(RSRpcServices.java:3122)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3455)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
{noformat}


*Analysis/Issue*
From initial analysis, it seems the problem occurs when we switch the read type.


> Reversed scan does not return expected number of rows
> --
>
> Key: HBASE-21621
> URL: https://issues.apache.org/jira/browse/HBASE-21621
> Project: HBase
>  Issue Type: Bug
>   

[jira] [Commented] (HBASE-21621) Reversed scan does not return expected number of rows

2018-12-19 Thread Nihal Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725166#comment-16725166
 ] 

Nihal Jain commented on HBASE-21621:


Attached a UT patch to reproduce the issue.

Note: The UT does not fail if we explicitly specify the read type ({{STREAM}} or 
{{PREAD}}).

> Reversed scan does not return expected number of rows
> --
>
> Key: HBASE-21621
> URL: https://issues.apache.org/jira/browse/HBASE-21621
> Project: HBase
>  Issue Type: Bug
>  Components: scan
>Affects Versions: 3.0.0, 2.1.1
>Reporter: Nihal Jain
>Priority: Critical
> Attachments: HBASE-21621.master.UT.patch
>
>
> *Steps to reproduce*
>  # Create a table and put some data into it (data should be big enough, say N 
> rows)
>  # Flush the table
>  # Scan the table with reversed set to true
> *Expected Result*
> N rows should be retrieved in reversed order
> *Actual Result*
> Fewer rows than expected are retrieved, with the following error in the logs
> {noformat}
> 2018-12-19 21:55:32,944 DEBUG 
> [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=39007] 
> regionserver.StoreScanner(1000): Switch to stream read (scanned=262214 bytes) 
> of cf
> 2018-12-19 21:55:32,955 ERROR 
> [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=39007] 
> ipc.RpcServer(471): Unexpected throwable object 
> java.lang.AssertionError: Key 
> \x00\x00\x00\x00\x00\x00\x00\x09/cf:a/1545236714675/Put/vlen=131072/seqid=4 
> followed by a error order key 
> \x00\x00\x00\x00\x00\x00\x00\x0F/cf:a/1545236715545/Put/vlen=131072/seqid=8 
> in cf cf in reversed scan
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedStoreScanner.checkScanOrder(ReversedStoreScanner.java:105)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:568)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6598)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6762)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6535)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3252)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3501)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> 2018-12-19 21:55:32,955 DEBUG 
> [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=39007] 
> ipc.CallRunner(142): callId: 508 service: ClientService methodName: Scan 
> size: 47 connection: 127.0.0.1:48328 deadline: 1545236792955, 
> exception=java.io.IOException: Key 
> \x00\x00\x00\x00\x00\x00\x00\x09/cf:a/1545236714675/Put/vlen=131072/seqid=4 
> followed by a error order key 
> \x00\x00\x00\x00\x00\x00\x00\x0F/cf:a/1545236715545/Put/vlen=131072/seqid=8 
> in cf cf in reversed scan
> 2018-12-19 21:55:33,060 DEBUG 
> [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=39007] 
> ipc.CallRunner(142): callId: 511 service: ClientService methodName: Scan 
> size: 47 connection: 127.0.0.1:48328 deadline: 1545236792955, 
> exception=org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> Expected nextCallSeq: 1 But the nextCallSeq got from client: 0; 
> request=scanner_id: 2421102592655360183 number_of_rows: 2147483647 
> close_scanner: false next_call_seq: 0 client_handles_partials: true 
> client_handles_heartbeats: true track_scan_metrics: false renew: false
> 2018-12-19 21:55:33,060 DEBUG [Time-limited test] 
> client.ScannerCallableWithReplicas(200): Scan with primary region returns 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
> 2421102592655360183 number_of_rows: 2147483647 close_scanner: false 
> next_call_seq: 0 client_handles_partials: true client_handles_heartbeats: 
> true track_scan_metrics: false renew: false
>   at org.apache.hadoop.hbase.regionserver.RSRpcServices.checkScanNextCallSeq(RSRpcServices.java:3122)
>   at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3455)
>   at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientSe

[jira] [Commented] (HBASE-21616) Port HBASE-21034 (Add new throttle type: read/write capacity unit) to branch-1

2018-12-19 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725208#comment-16725208
 ] 

Andrew Purtell commented on HBASE-21616:


Thanks [~openinx]. I linked both issues to this one.

> Port HBASE-21034 (Add new throttle type: read/write capacity unit) to branch-1
> --
>
> Key: HBASE-21616
> URL: https://issues.apache.org/jira/browse/HBASE-21616
> Project: HBase
>  Issue Type: Task
>Reporter: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0
>
>
> Port HBASE-21034 (Add new throttle type: read/write capacity unit) to branch-1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21622) Getting old value for cell after TTL expiry of latest value cell value set with TTL.

2018-12-19 Thread Pankaj Birat (JIRA)
Pankaj Birat created HBASE-21622:


 Summary: Getting old value for cell after TTL expiry of latest 
value cell value set with TTL.
 Key: HBASE-21622
 URL: https://issues.apache.org/jira/browse/HBASE-21622
 Project: HBase
  Issue Type: Bug
Reporter: Pankaj Birat
 Attachments: Screenshot 2018-12-19 at 11.18.52 PM.png, Screenshot 
2018-12-19 at 11.20.39 PM.png

I am using HBase version 1.2.7

I am getting the old value for a cell after TTL expiry of the latest cell 
value that was set with a TTL.

Table:

COLUMN FAMILIES DESCRIPTION

{NAME => 'data', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', 
COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'65536', REPLICATION_SCOPE => '1'}

 

First, I put a cell value without a TTL, e.g.:

put 'sepTest', '1', 'data:value', 'one'

Then, for the same key, I put a value with a TTL:

put 'sepTest', '1', 'data:value', 'updated_one', \{TTL => 10}

Until the TTL expires (10 seconds), I get the value 'updated_one' for key '1'.

After the TTL expires, I get the old value 'one' !Screenshot 2018-12-19 at 11.18.52 PM.png!

 

Attaching screenshot for reference





[jira] [Updated] (HBASE-21622) Getting old value for cell after TTL expiry of latest value cell value set with TTL.

2018-12-19 Thread Pankaj Birat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Birat updated HBASE-21622:
-
Description: 
I am using HBase version 1.2.7

Getting old value for cell after TTL expiry of latest value cell value set with 
TTL.

Table:

COLUMN FAMILIES DESCRIPTION

{NAME => 'data', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', 
COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'65536', REPLICATION_SCOPE => '1'}

 

First time I am putting cell value without TTL.

Eg.

*put 'sepTest', '1', 'data:value', 'one'*

Now for same key I am putting value with TTL

*put 'sepTest', '1', 'data:value', 'updated_one', \{TTL => 10}*

Till expiry time (10)
 I am getting value : 'updated_one' for key '1'

After expiry of TTL I am getting old value 'one' !Screenshot 2018-12-19 at 11.18.52 PM.png!

 

Attaching screenshot for reference

  was:
I am using HBase version 1.2.7

Getting old value for cell after TTL expiry of latest value cell value set with 
TTL.

Table:

COLUMN FAMILIES DESCRIPTION

{NAME => 'data', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', 
COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => 
'65536', REPLICATION_SCOPE => '1'}

 

First time I am putting cell value without TTL.

Eg.

put 'sepTest', '1', 'data:value', 'one'

Now for same key I am putting value with TTL

put 'sepTest', '1', 'data:value', 'updated_one', \{TTL => 10}

Till expiry time (10)
I am getting value : 'updated_one' for key '1'

After expiry of TTL I am getting old value 'one' !Screenshot 2018-12-19 at 11.18.52 PM.png!

 

Attaching screenshot for reference


> Getting old value for cell after TTL expiry of latest value cell value set 
> with TTL.
> 
>
> Key: HBASE-21622
> URL: https://issues.apache.org/jira/browse/HBASE-21622
> Project: HBase
>  Issue Type: Bug
>Reporter: Pankaj Birat
>Priority: Major
> Attachments: Screenshot 2018-12-19 at 11.18.52 PM.png, Screenshot 
> 2018-12-19 at 11.20.39 PM.png
>
>
> I am using HBase version 1.2.7
> Getting old value for cell after TTL expiry of latest value cell value set 
> with TTL.
> Table:
> COLUMN FAMILIES DESCRIPTION
> {NAME => 'data', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', 
> BLOCKSIZE => '65536', REPLICATION_SCOPE => '1'}
>  
> First time I am putting cell value without TTL.
> Eg.
> *put 'sepTest', '1', 'data:value', 'one'*
> Now for same key I am putting value with TTL
> *put 'sepTest', '1', 'data:value', 'updated_one', \{TTL => 10}*
> Till expiry time (10)
>  I am getting value : 'updated_one' for key '1'
> After expiry of TTL I am getting old value 'one' !Screenshot 2018-12-19 at 11.18.52 PM.png!
>  
> Attaching screenshot for reference





[jira] [Updated] (HBASE-21622) Getting old value for cell after TTL expiry of latest value cell value set with TTL.

2018-12-19 Thread Pankaj Birat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Birat updated HBASE-21622:
-
Priority: Trivial  (was: Major)

> Getting old value for cell after TTL expiry of latest value cell value set 
> with TTL.
> 
>
> Key: HBASE-21622
> URL: https://issues.apache.org/jira/browse/HBASE-21622
> Project: HBase
>  Issue Type: Bug
>Reporter: Pankaj Birat
>Priority: Trivial
> Attachments: Screenshot 2018-12-19 at 11.18.52 PM.png, Screenshot 
> 2018-12-19 at 11.20.39 PM.png
>
>
> I am using HBase version 1.2.7
> Getting old value for cell after TTL expiry of latest value cell value set 
> with TTL.
> Table:
> COLUMN FAMILIES DESCRIPTION
> {NAME => 'data', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', 
> BLOCKSIZE => '65536', REPLICATION_SCOPE => '1'}
>  
> First time I am putting cell value without TTL.
> Eg.
> *put 'sepTest', '1', 'data:value', 'one'*
> Now for same key I am putting value with TTL
> *put 'sepTest', '1', 'data:value', 'updated_one', \{TTL => 10}*
> Till expiry time (10)
>  I am getting value : 'updated_one' for key '1'
> After expiry of TTL I am getting old value 'one' !Screenshot 2018-12-19 at 11.18.52 PM.png!
>  
> Attaching screenshot for reference





[jira] [Commented] (HBASE-21622) Getting old value for cell after TTL expiry of latest value cell value set with TTL.

2018-12-19 Thread Wellington Chevreuil (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725239#comment-16725239
 ] 

Wellington Chevreuil commented on HBASE-21622:
--

This problem seems related to HBASE-21596, where even with VERSIONS set to 1, 
multiple versions are kept in the memstore. So if a delete for a specific 
version happens while there are multiple versions still in the memstore, the 
newest version is then shown. If you trigger a flush command for "sepTest" 
between the two inserts, there should be no value after TTL expiration. 
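The comment above describes how only a flush (or compaction) enforces the VERSIONS=1 limit, while the memstore keeps every write until then. The following toy model illustrates that interaction; it is a deliberately simplified sketch of the reported behaviour, not HBase internals, and all names in it (ToyStore, etc.) are invented for illustration.

```python
# Toy model: the "memstore" keeps every write until a flush, and only the
# flush enforces max_versions. A newer TTL'd cell can therefore expire and
# expose an older, unflushed cell -- mirroring the behaviour reported here.

class ToyStore:
    def __init__(self, max_versions=1):
        self.max_versions = max_versions
        self.memstore = []  # unflushed cells: (timestamp, value, ttl or None)
        self.hfile = []     # flushed cells, trimmed to max_versions

    def put(self, ts, value, ttl=None):
        self.memstore.append((ts, value, ttl))

    def flush(self):
        # Flushing keeps only the newest max_versions cells.
        cells = sorted(self.memstore + self.hfile, reverse=True)
        self.hfile = cells[: self.max_versions]
        self.memstore = []

    def get(self, now):
        # Read newest-first, skipping TTL-expired cells. Since the memstore
        # was never trimmed, an older cell can "reappear" after expiry.
        for ts, value, ttl in sorted(self.memstore + self.hfile, reverse=True):
            if ttl is None or now < ts + ttl:
                return value
        return None

store = ToyStore(max_versions=1)
store.put(ts=0, value="one")                    # put without TTL
store.put(ts=100, value="updated_one", ttl=10)  # later put with TTL=10
print(store.get(now=105))  # before expiry -> updated_one
print(store.get(now=120))  # after expiry, no flush -> one (the stale read)

store.flush()              # flushing trims the store to one version...
print(store.get(now=120))  # ...so after expiry nothing remains -> None
```

In this model, issuing a flush before reading trims the store to a single version, which matches the suggestion that flushing "sepTest" avoids the stale read.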

> Getting old value for cell after TTL expiry of latest value cell value set 
> with TTL.
> 
>
> Key: HBASE-21622
> URL: https://issues.apache.org/jira/browse/HBASE-21622
> Project: HBase
>  Issue Type: Bug
>Reporter: Pankaj Birat
>Priority: Trivial
> Attachments: Screenshot 2018-12-19 at 11.18.52 PM.png, Screenshot 
> 2018-12-19 at 11.20.39 PM.png
>
>
> I am using HBase version 1.2.7
> Getting old value for cell after TTL expiry of latest value cell value set 
> with TTL.
> Table:
> COLUMN FAMILIES DESCRIPTION
> {NAME => 'data', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', 
> BLOCKSIZE => '65536', REPLICATION_SCOPE => '1'}
>  
> First time I am putting cell value without TTL.
> Eg.
> *put 'sepTest', '1', 'data:value', 'one'*
> Now for same key I am putting value with TTL
> *put 'sepTest', '1', 'data:value', 'updated_one', \{TTL => 10}*
> Till expiry time (10)
>  I am getting value : 'updated_one' for key '1'
> After expiry of TTL I am getting old value 'one' !Screenshot 2018-12-19 at 11.18.52 PM.png!
>  
> Attaching screenshot for reference





[jira] [Commented] (HBASE-21610) numOpenConnections metric is set to -1 when zero server channel exist

2018-12-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725242#comment-16725242
 ] 

Hadoop QA commented on HBASE-21610:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
1s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
11s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 6s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m 31s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}163m 56s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 4s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}204m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.TestRestartCluster |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21610 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 10e30676f955 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8991877bb2 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/15328/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/15328/testReport/ |
| Max. process+thread count | 4163 (vs. ulimit of 1) |
| modules

[jira] [Comment Edited] (HBASE-21622) Getting old value for cell after TTL expiry of latest value cell value set with TTL.

2018-12-19 Thread Pankaj Birat (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725243#comment-16725243
 ] 

Pankaj Birat edited comment on HBASE-21622 at 12/19/18 6:15 PM:


[~wchevreuil]

There would be multiple versions in the memstore if the two (or more) writes 
came before a memstore flush.

But in my case there was around 12 hours between these two put calls, the 
first without a TTL and the second with a TTL.


was (Author: pkbmh):
[~wchevreuil]

there would be multiple versions in the memstore if the two/more writes came 
before flush.

But in my case there was around 12hrs of difference between these two put 
calls, first without TTL second with TTL.

> Getting old value for cell after TTL expiry of latest value cell value set 
> with TTL.
> 
>
> Key: HBASE-21622
> URL: https://issues.apache.org/jira/browse/HBASE-21622
> Project: HBase
>  Issue Type: Bug
>Reporter: Pankaj Birat
>Priority: Trivial
> Attachments: Screenshot 2018-12-19 at 11.18.52 PM.png, Screenshot 
> 2018-12-19 at 11.20.39 PM.png
>
>
> I am using HBase version 1.2.7
> Getting old value for cell after TTL expiry of latest value cell value set 
> with TTL.
> Table:
> COLUMN FAMILIES DESCRIPTION
> {NAME => 'data', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', 
> BLOCKSIZE => '65536', REPLICATION_SCOPE => '1'}
>  
> First time I am putting cell value without TTL.
> Eg.
> *put 'sepTest', '1', 'data:value', 'one'*
> Now for same key I am putting value with TTL
> *put 'sepTest', '1', 'data:value', 'updated_one', \{TTL => 10}*
> Till expiry time (10)
>  I am getting value : 'updated_one' for key '1'
> After expiry of TTL I am getting old value 'one' !Screenshot 2018-12-19 at 11.18.52 PM.png!
>  
> Attaching screenshot for reference





[jira] [Commented] (HBASE-21622) Getting old value for cell after TTL expiry of latest value cell value set with TTL.

2018-12-19 Thread Pankaj Birat (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725243#comment-16725243
 ] 

Pankaj Birat commented on HBASE-21622:
--

[~wchevreuil]

There would be multiple versions in the memstore if the two (or more) writes 
came before a flush.

But in my case there was around 12 hours between these two put calls, the 
first without a TTL and the second with a TTL.

> Getting old value for cell after TTL expiry of latest value cell value set 
> with TTL.
> 
>
> Key: HBASE-21622
> URL: https://issues.apache.org/jira/browse/HBASE-21622
> Project: HBase
>  Issue Type: Bug
>Reporter: Pankaj Birat
>Priority: Trivial
> Attachments: Screenshot 2018-12-19 at 11.18.52 PM.png, Screenshot 
> 2018-12-19 at 11.20.39 PM.png
>
>
> I am using HBase version 1.2.7
> Getting old value for cell after TTL expiry of latest value cell value set 
> with TTL.
> Table:
> COLUMN FAMILIES DESCRIPTION
> {NAME => 'data', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', 
> BLOCKSIZE => '65536', REPLICATION_SCOPE => '1'}
>  
> First time I am putting cell value without TTL.
> Eg.
> *put 'sepTest', '1', 'data:value', 'one'*
> Now for same key I am putting value with TTL
> *put 'sepTest', '1', 'data:value', 'updated_one', \{TTL => 10}*
> Till expiry time (10)
>  I am getting value : 'updated_one' for key '1'
> After expiry of TTL I am getting old value 'one' !Screenshot 2018-12-19 at 11.18.52 PM.png!
>  
> Attaching screenshot for reference





[jira] [Commented] (HBASE-21619) Fix warning message caused by incorrect ternary operator evaluation

2018-12-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725246#comment-16725246
 ] 

Hadoop QA commented on HBASE-21619:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
50s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
46s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 19s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}134m 
15s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21619 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952363/HBASE-21619.master.001.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 5734dc2503aa 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8991877bb2 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/15329/testReport/ |
| Max. process+thread count | 4809 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE

[jira] [Comment Edited] (HBASE-21622) Getting old value for cell after TTL expiry of latest value cell value set with TTL.

2018-12-19 Thread Pankaj Birat (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725243#comment-16725243
 ] 

Pankaj Birat edited comment on HBASE-21622 at 12/19/18 6:21 PM:


[~wchevreuil]

There would be multiple versions in the memstore if the two (or more) writes 
came before a memstore flush.

But in my case there was around 12 hours between these two put calls, the 
first without a TTL and the second with a TTL.


was (Author: pkbmh):
[~wchevreuil]

there would be multiple versions in the memstore if the two/more writes came 
before memstore flush.

But in my case there was around 12hrs of difference between these two put 
calls, first without TTL second with TTL.

> Getting old value for cell after TTL expiry of latest value cell value set 
> with TTL.
> 
>
> Key: HBASE-21622
> URL: https://issues.apache.org/jira/browse/HBASE-21622
> Project: HBase
>  Issue Type: Bug
>Reporter: Pankaj Birat
>Priority: Trivial
> Attachments: Screenshot 2018-12-19 at 11.18.52 PM.png, Screenshot 
> 2018-12-19 at 11.20.39 PM.png
>
>
> I am using HBase version 1.2.7
> Getting old value for cell after TTL expiry of latest value cell value set 
> with TTL.
> Table:
> COLUMN FAMILIES DESCRIPTION
> {NAME => 'data', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', 
> BLOCKSIZE => '65536', REPLICATION_SCOPE => '1'}
>  
> First time I am putting cell value without TTL.
> Eg.
> *put 'sepTest', '1', 'data:value', 'one'*
> Now for same key I am putting value with TTL
> *put 'sepTest', '1', 'data:value', 'updated_one', \{TTL => 10}*
> Till expiry time (10)
>  I am getting value : 'updated_one' for key '1'
> After expiry of TTL I am getting old value 'one' !Screenshot 2018-12-19 at 11.18.52 PM.png!
>  
> Attaching screenshot for reference





[jira] [Commented] (HBASE-21622) Getting old value for cell after TTL expiry of latest value cell value set with TTL.

2018-12-19 Thread Wellington Chevreuil (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725259#comment-16725259
 ] 

Wellington Chevreuil commented on HBASE-21622:
--

[~pkbmh], yeah, actually the flush must have occurred between the two puts on 
the memstore, but before TTL expiration, for this to work as it should. I 
think it's still related to HBASE-21596, as the main problem is actually 
honouring the VERSIONS value prop

> Getting old value for cell after TTL expiry of latest value cell value set 
> with TTL.
> 
>
> Key: HBASE-21622
> URL: https://issues.apache.org/jira/browse/HBASE-21622
> Project: HBase
>  Issue Type: Bug
>Reporter: Pankaj Birat
>Priority: Trivial
> Attachments: Screenshot 2018-12-19 at 11.18.52 PM.png, Screenshot 
> 2018-12-19 at 11.20.39 PM.png
>
>
> I am using HBase version 1.2.7
> Getting old value for cell after TTL expiry of latest value cell value set 
> with TTL.
> Table:
> COLUMN FAMILIES DESCRIPTION
> {NAME => 'data', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', 
> KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', 
> BLOCKSIZE => '65536', REPLICATION_SCOPE => '1'}
>  
> First time I am putting cell value without TTL.
> Eg.
> *put 'sepTest', '1', 'data:value', 'one'*
> Now for same key I am putting value with TTL
> *put 'sepTest', '1', 'data:value', 'updated_one', \{TTL => 10}*
> Till expiry time (10)
>  I am getting value : 'updated_one' for key '1'
> After expiry of TTL I am getting old value 'one' !Screenshot 2018-12-19 at 11.18.52 PM.png!
>  
> Attaching screenshot for reference





[jira] [Commented] (HBASE-21611) REGION_STATE_TRANSITION_CONFIRM_CLOSED should interact better with crash procedure

2018-12-19 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725278#comment-16725278
 ] 

Sergey Shelukhin commented on HBASE-21611:
--

Well, there are 10s or 100s of regions in this state, so retries, even at a 
maximum interval of 10 minutes, log 5-8 lines every few seconds on average, 
and more before they reach the 10-minute wait.
Looking at how SCP already checks for the RIT procedure, I wonder if it should 
instead replace the RIT procedure at the beginning and make it a dependency, 
rather than checking for it at the end. Not sure if ProcedureV2 would allow 
making it a dependency retroactively. Then RIT itself could avoid waiting 
forever because it expects SCP to take over; so if there's no SCP, it's some 
sort of bug. 


> REGION_STATE_TRANSITION_CONFIRM_CLOSED should interact better with crash 
> procedure
> --
>
> Key: HBASE-21611
> URL: https://issues.apache.org/jira/browse/HBASE-21611
> Project: HBase
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Priority: Major
>
> 1) Not a bug per se, since HDFS is not supposed to lose files, just a bit 
> fragile.
> When a dead server's WAL directory is deleted (due to a manual intervention, 
> or some issue with HDFS) while some regions are in CLOSING state on that 
> server, they get stuck forever in REGION_STATE_TRANSITION_CONFIRM_CLOSED - 
> REGION_STATE_TRANSITION_CLOSE - "give up and mark the procedure as complete, 
> the parent procedure will take care of this" loop. There's no crash procedure 
> for the server so nobody ever takes care of that.
> 2) Under normal circumstances, when a large WAL is being split, this same 
> loop keeps spamming the logs and wasting resources for no reason, until the 
> crash procedure completes. There's no reason for it to retry - it should just 
> wait for crash procedure.





[jira] [Commented] (HBASE-21614) RIT recovery with ServerCrashProcedure is broken in multiple ways

2018-12-19 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725282#comment-16725282
 ] 

Sergey Shelukhin commented on HBASE-21614:
--

I see; then it must not be a problem. The problem is that regions are in 
CLOSING state, so they don't get added to the AM and SCP doesn't pick them up.

> RIT recovery with ServerCrashProcedure is broken in multiple ways
> -
>
> Key: HBASE-21614
> URL: https://issues.apache.org/jira/browse/HBASE-21614
> Project: HBase
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Priority: Major
>
> Master is restarting after a previous master crashed while recovering some 
> regions from a dead server.
> Master recovers the RIT for the region; however, the RIT has no location 
> (logged, at least) in CONFIRM_CLOSE state. That is a potential problem #0.5 - 
> confirm where? But that should be covered by meta, so not a big deal, right? 
> As such, it doesn't seem to add the region to the server map anywhere
> {noformat}
> 2018-12-17 14:51:14,606 INFO  [master/:17000:becomeActiveMaster] 
> assignment.AssignmentManager: Attach pid=38015, 
> state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_CLOSED, hasLock=false; 
> TransitRegionStateProcedure table=t1, region=region1, REOPEN/MOVE to 
> rit=OFFLINE, location=null, table=t1, region=region1 to restore RIT
> {noformat}
> However, in this case ServerCrashProcedure for the server kicks off BEFORE 
> meta is loaded.
> That seems to be problem #1 - the SCP captures the set of regions to recover 
> as soon as it starts, so in this case it captures nothing.
> I've grepped our logs for successful cases of SCP interacting with region 
> transition at startup, and in all cases the meta was loaded before SCP.
> Seems like a race condition.
> {noformat}
> 2018-12-17 14:51:14,625 INFO  [master/:17000:becomeActiveMaster] 
> master.RegionServerTracker: Starting RegionServerTracker; 0 have existing 
> ServerCrashProcedures, 103 possibly 'live' servers, and 1 'splitting'.
> 2018-12-17 14:51:20,770 INFO  [master/:17000:becomeActiveMaster] 
> master.ServerManager: Processing expiration of server1,17020,1544636616174 on 
> ,17000,1545087053243
> 2018-12-17 14:51:20,921 INFO  [master/:17000:becomeActiveMaster] 
> assignment.AssignmentManager: Added server1,17020,1544636616174 to dead 
> servers which carryingMeta=false, submitted ServerCrashProcedure pid=111298
> 2018-12-17 14:51:30,728 INFO  [PEWorker-13] procedure.ServerCrashProcedure: 
> Start pid=111298, state=RUNNABLE:SERVER_CRASH_START, hasLock=true; 
> ServerCrashProcedure server=server1,17020,1544636616174, splitWal=true, 
> meta=false
> {noformat}
> Meta is only loaded 11-12 seconds later.
> If one looks at the meta-loading code, however, there is one more problem, #2: the 
> region is in CLOSING state, so {{addRegionToServer}} is not going to be 
> called - it is only called for OPENED regions.
> Expanding on the above, I've only seen SCP unblock stuck region transition at 
> startup when region started out in meta as OPEN.
> {noformat}
> 2018-12-17 14:51:42,403 INFO  [master/:17000:becomeActiveMaster] 
> assignment.RegionStateStore: Load hbase:meta entry region=region1, 
> regionState=CLOSING, lastHost=server1,17020,1544636616174, 
> regionLocation=server1,17020,1544636616174, openSeqNum=629131
> {noformat}
> SCP predictably finishes without doing anything; there are no other logs for this pid
> {noformat}
> 2018-12-17 14:52:19,046 INFO  [PEWorker-2] procedure2.ProcedureExecutor: 
> Finished pid=111298, state=SUCCESS, hasLock=false; ServerCrashProcedure 
> server=server1,17020,1544636616174, splitWal=true, meta=false in 58.0010sec
> {noformat}
> After that, the region is still stuck trying to close in 
> TransitRegionStateProcedure; it stays in the same state for hours, including 
> across master restarts.
> {noformat}
> 2018-12-17 15:09:35,216 WARN  [PEWorker-14] 
> assignment.TransitRegionStateProcedure: Failed transition, suspend 604secs 
> pid=38015, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE, hasLock=true; 
> TransitRegionStateProcedure table=t1, region=region1, REOPEN/MOVE; 
> rit=CLOSING, location=server1,17020,1544636616174; waiting on rectified 
> condition fixed by other Procedure or operator intervention
> {noformat}





[jira] [Assigned] (HBASE-21614) RIT recovery with ServerCrashProcedure is broken in multiple ways

2018-12-19 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HBASE-21614:


Assignee: Sergey Shelukhin

> RIT recovery with ServerCrashProcedure is broken in multiple ways
> -
>
> Key: HBASE-21614
> URL: https://issues.apache.org/jira/browse/HBASE-21614
> Project: HBase
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major





[jira] [Comment Edited] (HBASE-21614) RIT recovery with ServerCrashProcedure is broken in multiple ways

2018-12-19 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725282#comment-16725282
 ] 

Sergey Shelukhin edited comment on HBASE-21614 at 12/19/18 7:17 PM:


I see; then it must not be a problem. The problem is that regions are in 
CLOSING state, so they don't get added to the AM and SCP doesn't pick them up.



was (Author: sershe):
I see; then it must not be a problem. The problem is that regions are in 
CLOSING state, so they don't get added to the AM and SCP doesn't pick them up.

> RIT recovery with ServerCrashProcedure is broken in multiple ways
> -
>
> Key: HBASE-21614
> URL: https://issues.apache.org/jira/browse/HBASE-21614
> Project: HBase
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Priority: Major





[jira] [Updated] (HBASE-21614) RIT recovery with ServerCrashProcedure doesn't account for all regions

2018-12-19 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-21614:
-
Summary: RIT recovery with ServerCrashProcedure doesn't account for all 
regions  (was: RIT recovery with ServerCrashProcedure is broken in multiple 
ways)

> RIT recovery with ServerCrashProcedure doesn't account for all regions
> --
>
> Key: HBASE-21614
> URL: https://issues.apache.org/jira/browse/HBASE-21614
> Project: HBase
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major





[jira] [Updated] (HBASE-21614) RIT recovery with ServerCrashProcedure doesn't account for all regions

2018-12-19 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-21614:
-
Description: 
Master is restarting after a previous master crashed while recovering some 
regions from a dead server.

Master recovers RIT for the region, however the RIT has no location (logged, at 
least) in CONFIRM_CLOSE state. That is a potential problem #0.5 - confirm 
where? But that should be covered by meta, so not a big deal, right. As such it 
doesn't seem to add the region to server map anywhere
{noformat}
2018-12-17 14:51:14,606 INFO  [master/:17000:becomeActiveMaster] 
assignment.AssignmentManager: Attach pid=38015, 
state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_CLOSED, hasLock=false; 
TransitRegionStateProcedure table=t1, region=region1, REOPEN/MOVE to 
rit=OFFLINE, location=null, table=t1, region=region1 to restore RIT
{noformat}

-However, in this case ServerCrashProcedure for the server kicks off BEFORE 
meta is loaded-.
-That seems to be a problem #1 - it immediately gets regions to later recover, 
so in this case it gets nothing-.
-I've grepped our logs for successful cases of SCP interacting with region 
transition at startup, and in all cases the meta was loaded before SCP-.
-Seems like a race condition-. Looks like SCP handles it
{noformat}
2018-12-17 14:51:14,625 INFO  [master/:17000:becomeActiveMaster] 
master.RegionServerTracker: Starting RegionServerTracker; 0 have existing 
ServerCrashProcedures, 103 possibly 'live' servers, and 1 'splitting'.
2018-12-17 14:51:20,770 INFO  [master/:17000:becomeActiveMaster] 
master.ServerManager: Processing expiration of server1,17020,1544636616174 on 
,17000,1545087053243
2018-12-17 14:51:20,921 INFO  [master/:17000:becomeActiveMaster] 
assignment.AssignmentManager: Added server1,17020,1544636616174 to dead servers 
which carryingMeta=false, submitted ServerCrashProcedure pid=111298
2018-12-17 14:51:30,728 INFO  [PEWorker-13] procedure.ServerCrashProcedure: 
Start pid=111298, state=RUNNABLE:SERVER_CRASH_START, hasLock=true; 
ServerCrashProcedure server=server1,17020,1544636616174, splitWal=true, 
meta=false
{noformat}
Meta is only loaded 11-12 seconds later.
If one looks at meta-loading code however, there is one more problem #2 - the 
region is in CLOSING state, so the {{addRegionToServer}} is not going to be 
called - it's only called for OPENED regions. 
Expanding on the above, I've only seen SCP unblock stuck region transition at 
startup when region started out in meta as OPEN.
{noformat}
2018-12-17 14:51:42,403 INFO  [master/:17000:becomeActiveMaster] 
assignment.RegionStateStore: Load hbase:meta entry region=region1, 
regionState=CLOSING, lastHost=server1,17020,1544636616174, 
regionLocation=server1,17020,1544636616174, openSeqNum=629131
{noformat}
SCP predictably finishes without doing anything; no other logs for this pid
{noformat}
2018-12-17 14:52:19,046 INFO  [PEWorker-2] procedure2.ProcedureExecutor: 
Finished pid=111298, state=SUCCESS, hasLock=false; ServerCrashProcedure 
server=server1,17020,1544636616174, splitWal=true, meta=false in 58.0010sec
{noformat}
After that, region is still stuck trying to be closed in 
TransitRegionStateProcedure; it's in the same state for hours including across 
master restarts.
{noformat}
2018-12-17 15:09:35,216 WARN  [PEWorker-14] 
assignment.TransitRegionStateProcedure: Failed transition, suspend 604secs 
pid=38015, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE, hasLock=true; 
TransitRegionStateProcedure table=t1, region=region1, REOPEN/MOVE; rit=CLOSING, 
location=server1,17020,1544636616174; waiting on rectified condition fixed by 
other Procedure or operator intervention
{noformat}



  was:
Master is restarting after a previous master crashed while recovering some 
regions from a dead server.

Master recovers RIT for the region, however the RIT has no location (logged, at 
least) in CONFIRM_CLOSE state. That is a potential problem #0.5 - confirm 
where? But that should be covered by meta, so not a big deal, right. As such it 
doesn't seem to add the region to server map anywhere
{noformat}
2018-12-17 14:51:14,606 INFO  [master/:17000:becomeActiveMaster] 
assignment.AssignmentManager: Attach pid=38015, 
state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_CLOSED, hasLock=false; 
TransitRegionStateProcedure table=t1, region=region1, REOPEN/MOVE to 
rit=OFFLINE, location=null, table=t1, region=region1 to restore RIT
{noformat}

-However, in this case ServerCrashProcedure for the server kicks off BEFORE 
meta is loaded.
That seems to be a problem #1 - it immediately gets regions to later recover, 
so in this case it gets nothing.
I've grepped our logs for successful cases of SCP interacting with region 
transition at startup, and in all cases the meta was loaded before SCP.
Seems like a race condition-. Looks like SCP handles it
{noformat}
2018-12-17 14:51:14,625 INFO  [master/:17000:becomeActiveMas

[jira] [Updated] (HBASE-21614) RIT recovery with ServerCrashProcedure doesn't account for all regions

2018-12-19 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-21614:
-
Description: 
Master is restarting after a previous master crashed while recovering some 
regions from a dead server.

Master recovers RIT for the region, however the RIT has no location (logged, at 
least) in CONFIRM_CLOSE state. That is a potential problem #0.5 - confirm 
where? But that should be covered by meta, so not a big deal, right. As such it 
doesn't seem to add the region to server map anywhere
{noformat}
2018-12-17 14:51:14,606 INFO  [master/:17000:becomeActiveMaster] 
assignment.AssignmentManager: Attach pid=38015, 
state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_CLOSED, hasLock=false; 
TransitRegionStateProcedure table=t1, region=region1, REOPEN/MOVE to 
rit=OFFLINE, location=null, table=t1, region=region1 to restore RIT
{noformat}

-However, in this case ServerCrashProcedure for the server kicks off BEFORE 
meta is loaded.
That seems to be a problem #1 - it immediately gets regions to later recover, 
so in this case it gets nothing.
I've grepped our logs for successful cases of SCP interacting with region 
transition at startup, and in all cases the meta was loaded before SCP.
Seems like a race condition-. Looks like SCP handles it
{noformat}
2018-12-17 14:51:14,625 INFO  [master/:17000:becomeActiveMaster] 
master.RegionServerTracker: Starting RegionServerTracker; 0 have existing 
ServerCrashProcedures, 103 possibly 'live' servers, and 1 'splitting'.
2018-12-17 14:51:20,770 INFO  [master/:17000:becomeActiveMaster] 
master.ServerManager: Processing expiration of server1,17020,1544636616174 on 
,17000,1545087053243
2018-12-17 14:51:20,921 INFO  [master/:17000:becomeActiveMaster] 
assignment.AssignmentManager: Added server1,17020,1544636616174 to dead servers 
which carryingMeta=false, submitted ServerCrashProcedure pid=111298
2018-12-17 14:51:30,728 INFO  [PEWorker-13] procedure.ServerCrashProcedure: 
Start pid=111298, state=RUNNABLE:SERVER_CRASH_START, hasLock=true; 
ServerCrashProcedure server=server1,17020,1544636616174, splitWal=true, 
meta=false
{noformat}
Meta is only loaded 11-12 seconds later.
If one looks at meta-loading code however, there is one more problem #2 - the 
region is in CLOSING state, so the {{addRegionToServer}} is not going to be 
called - it's only called for OPENED regions. 
Expanding on the above, I've only seen SCP unblock stuck region transition at 
startup when region started out in meta as OPEN.
{noformat}
2018-12-17 14:51:42,403 INFO  [master/:17000:becomeActiveMaster] 
assignment.RegionStateStore: Load hbase:meta entry region=region1, 
regionState=CLOSING, lastHost=server1,17020,1544636616174, 
regionLocation=server1,17020,1544636616174, openSeqNum=629131
{noformat}
SCP predictably finishes without doing anything; no other logs for this pid
{noformat}
2018-12-17 14:52:19,046 INFO  [PEWorker-2] procedure2.ProcedureExecutor: 
Finished pid=111298, state=SUCCESS, hasLock=false; ServerCrashProcedure 
server=server1,17020,1544636616174, splitWal=true, meta=false in 58.0010sec
{noformat}
After that, region is still stuck trying to be closed in 
TransitRegionStateProcedure; it's in the same state for hours including across 
master restarts.
{noformat}
2018-12-17 15:09:35,216 WARN  [PEWorker-14] 
assignment.TransitRegionStateProcedure: Failed transition, suspend 604secs 
pid=38015, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE, hasLock=true; 
TransitRegionStateProcedure table=t1, region=region1, REOPEN/MOVE; rit=CLOSING, 
location=server1,17020,1544636616174; waiting on rectified condition fixed by 
other Procedure or operator intervention
{noformat}



  was:
Master is restarting after a previous master crashed while recovering some 
regions from a dead server.

Master recovers RIT for the region, however the RIT has no location (logged, at 
least) in CONFIRM_CLOSE state. That is a potential problem #0.5 - confirm 
where? But that should be covered by meta, so not a big deal, right. As such it 
doesn't seem to add the region to server map anywhere
{noformat}
2018-12-17 14:51:14,606 INFO  [master/:17000:becomeActiveMaster] 
assignment.AssignmentManager: Attach pid=38015, 
state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_CLOSED, hasLock=false; 
TransitRegionStateProcedure table=t1, region=region1, REOPEN/MOVE to 
rit=OFFLINE, location=null, table=t1, region=region1 to restore RIT
{noformat}

However, in this case ServerCrashProcedure for the server kicks off BEFORE meta 
is loaded.
That seems to be a problem #1 - it immediately gets regions to later recover, 
so in this case it gets nothing.
I've grepped our logs for successful cases of SCP interacting with region 
transition at startup, and in all cases the meta was loaded before SCP.
Seems like a race condition.
{noformat}
2018-12-17 14:51:14,625 INFO  [master/:17000:becomeActiveMaster] 
master.RegionServerTracker: 

[jira] [Updated] (HBASE-21614) RIT recovery with ServerCrashProcedure doesn't account for all regions

2018-12-19 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-21614:
-
Attachment: HBASE-21614.master.001.patch

> RIT recovery with ServerCrashProcedure doesn't account for all regions
> --
>
> Key: HBASE-21614
> URL: https://issues.apache.org/jira/browse/HBASE-21614
> Project: HBase
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HBASE-21614.master.001.patch
>





[jira] [Updated] (HBASE-21614) RIT recovery with ServerCrashProcedure doesn't account for all regions

2018-12-19 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-21614:
-
Status: Patch Available  (was: Open)

Attached a small patch.
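Without seeing the attached patch, one plausible shape of the fix - a sketch under that assumption, with invented names, not the actual HBASE-21614 change - is to attach a region to its last-known server for any state that still pins it to that server, so a later ServerCrashProcedure finds and recovers it:

```java
import java.util.*;

// Hypothetical sketch of the fix direction; the real patch may differ.
// Unlike the buggy behavior (OPEN-only), regions in transient states
// such as CLOSING and OPENING are also attached to their last-known
// server, so that server's crash handling can reassign them.
class FixedAssignmentManager {
    enum State { OPEN, OPENING, CLOSING, CLOSED }

    private final Map<String, Set<String>> regionsByServer = new HashMap<>();

    void loadMetaEntry(String region, State state, String lastServer) {
        // Any state that still ties the region to a server location must
        // be visible to that server's ServerCrashProcedure.
        if (state == State.OPEN || state == State.OPENING || state == State.CLOSING) {
            regionsByServer.computeIfAbsent(lastServer, s -> new HashSet<>())
                           .add(region);
        }
    }

    Set<String> regionsOnServer(String server) {
        return regionsByServer.getOrDefault(server, Collections.emptySet());
    }
}
```

With this change, a region loaded from meta as CLOSING on the crashed server shows up in `regionsOnServer`, so the SCP no longer finishes "without doing anything".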

> RIT recovery with ServerCrashProcedure doesn't account for all regions
> --
>
> Key: HBASE-21614
> URL: https://issues.apache.org/jira/browse/HBASE-21614
> Project: HBase
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HBASE-21614.master.001.patch
>
>
> Master is restarting after a previous master crashed while recovering some 
> regions from a dead server.
> Master recovers the RIT for the region; however, the RIT has no location (logged, 
> at least) in CONFIRM_CLOSE state. That is potential problem #0.5 - confirm 
> where? But that should be covered by meta, so not a big deal, right? As such, 
> it doesn't seem to add the region to the server map anywhere.
> {noformat}
> 2018-12-17 14:51:14,606 INFO  [master/:17000:becomeActiveMaster] 
> assignment.AssignmentManager: Attach pid=38015, 
> state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_CLOSED, hasLock=false; 
> TransitRegionStateProcedure table=t1, region=region1, REOPEN/MOVE to 
> rit=OFFLINE, location=null, table=t1, region=region1 to restore RIT
> {noformat}
> -However, in this case ServerCrashProcedure for the server kicks off BEFORE 
> meta is loaded-.
> -That seems to be a problem #1 - it immediately gets regions to later 
> recover, so in this case it gets nothing-.
> -I've grepped our logs for successful cases of SCP interacting with region 
> transition at startup, and in all cases the meta was loaded before SCP-.
> -Seems like a race condition-. Looks like SCP handles it
> {noformat}
> 2018-12-17 14:51:14,625 INFO  [master/:17000:becomeActiveMaster] 
> master.RegionServerTracker: Starting RegionServerTracker; 0 have existing 
> ServerCrashProcedures, 103 possibly 'live' servers, and 1 'splitting'.
> 2018-12-17 14:51:20,770 INFO  [master/:17000:becomeActiveMaster] 
> master.ServerManager: Processing expiration of server1,17020,1544636616174 on 
> ,17000,1545087053243
> 2018-12-17 14:51:20,921 INFO  [master/:17000:becomeActiveMaster] 
> assignment.AssignmentManager: Added server1,17020,1544636616174 to dead 
> servers which carryingMeta=false, submitted ServerCrashProcedure pid=111298
> 2018-12-17 14:51:30,728 INFO  [PEWorker-13] procedure.ServerCrashProcedure: 
> Start pid=111298, state=RUNNABLE:SERVER_CRASH_START, hasLock=true; 
> ServerCrashProcedure server=server1,17020,1544636616174, splitWal=true, 
> meta=false
> {noformat}
> Meta is only loaded 11-12 seconds later.
> If one looks at the meta-loading code, however, there is one more problem, #2: the 
> region is in CLOSING state, so {{addRegionToServer}} is not going to be 
> called - it is only called for OPENED regions. 
> Expanding on the above, I've only seen SCP unblock stuck region transition at 
> startup when region started out in meta as OPEN.
> {noformat}
> 2018-12-17 14:51:42,403 INFO  [master/:17000:becomeActiveMaster] 
> assignment.RegionStateStore: Load hbase:meta entry region=region1, 
> regionState=CLOSING, lastHost=server1,17020,1544636616174, 
> regionLocation=server1,17020,1544636616174, openSeqNum=629131
> {noformat}
> SCP predictably finishes without doing anything; no other logs for this pid
> {noformat}
> 2018-12-17 14:52:19,046 INFO  [PEWorker-2] procedure2.ProcedureExecutor: 
> Finished pid=111298, state=SUCCESS, hasLock=false; ServerCrashProcedure 
> server=server1,17020,1544636616174, splitWal=true, meta=false in 58.0010sec
> {noformat}
> After that, the region is still stuck trying to be closed in 
> TransitRegionStateProcedure; it stays in the same state for hours, including 
> across master restarts.
> {noformat}
> 2018-12-17 15:09:35,216 WARN  [PEWorker-14] 
> assignment.TransitRegionStateProcedure: Failed transition, suspend 604secs 
> pid=38015, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE, hasLock=true; 
> TransitRegionStateProcedure table=t1, region=region1, REOPEN/MOVE; 
> rit=CLOSING, location=server1,17020,1544636616174; waiting on rectified 
> condition fixed by other Procedure or operator intervention
> {noformat}





[jira] [Comment Edited] (HBASE-21614) RIT recovery with ServerCrashProcedure doesn't account for all regions

2018-12-19 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725306#comment-16725306
 ] 

Sergey Shelukhin edited comment on HBASE-21614 at 12/19/18 7:41 PM:


A small patch. Let's see if tests pass... I actually checked, and it looks like 
in normal operation CLOSING and OPENING regions are already added to the server 
map; so it should be ok to add them from meta as well.


was (Author: sershe):
A small patch
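The idea described in the comment can be modeled roughly as follows (illustrative plain Java; the class, method, and field names are invented for this sketch and do not match the real AssignmentManager/RegionStateStore code): when rebuilding the server-to-regions map from meta at master startup, count CLOSING and OPENING regions as present on their last-known server, so that a ServerCrashProcedure for that server can find and recover them.

```java
import java.util.*;

// Illustrative sketch only -- names are invented, not the HBase API.
public class MetaLoadSketch {
    enum State { OPEN, OPENING, CLOSING, CLOSED, OFFLINE }

    static class MetaRow {
        final String region; final State state; final String server;
        MetaRow(String region, State state, String server) {
            this.region = region; this.state = state; this.server = server;
        }
    }

    // Rebuild the server -> regions map from meta at master startup.
    // Before the fix, only OPEN regions were added; the patch's idea is to
    // also add OPENING and CLOSING regions, matching what normal operation
    // already does, so SCP can recover a region stuck in CLOSING.
    static Map<String, Set<String>> buildServerMap(List<MetaRow> meta) {
        Map<String, Set<String>> serverToRegions = new HashMap<>();
        for (MetaRow row : meta) {
            boolean onServer = row.state == State.OPEN
                || row.state == State.OPENING
                || row.state == State.CLOSING;
            if (onServer && row.server != null) {
                serverToRegions.computeIfAbsent(row.server, k -> new HashSet<>())
                    .add(row.region);
            }
        }
        return serverToRegions;
    }

    public static void main(String[] args) {
        List<MetaRow> meta = Arrays.asList(
            new MetaRow("region1", State.CLOSING, "server1,17020,1544636616174"),
            new MetaRow("region2", State.OPEN, "server2,17020,1"));
        // region1 is now attributed to the dead server, so an SCP for
        // server1 would see it and unblock the stuck transition.
        System.out.println(buildServerMap(meta).get("server1,17020,1544636616174"));
        // prints [region1]
    }
}
```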



[jira] [Commented] (HBASE-21577) do not close regions when RS is dying due to a broken WAL

2018-12-19 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725308#comment-16725308
 ] 

Sergey Shelukhin commented on HBASE-21577:
--

la la la... ;)

> do not close regions when RS is dying due to a broken WAL
> -
>
> Key: HBASE-21577
> URL: https://issues.apache.org/jira/browse/HBASE-21577
> Project: HBase
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HBASE-21577.master.001.patch, 
> HBASE-21577.master.002.patch
>
>
> See HBASE-21576. DroppedSnapshot can be an FS failure; also, when the WAL is 
> broken, some regions whose flushes are already in flight keep retrying, 
> resulting in minutes-long shutdown times. Since the WAL will be replayed anyway, 
> flushing regions doesn't provide much benefit.





[jira] [Commented] (HBASE-21225) Having RPC & Space quota on a table/Namespace doesn't allow space quota to be removed using 'NONE'

2018-12-19 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725319#comment-16725319
 ] 

Sakthi commented on HBASE-21225:


The failed UTs don't look related.  [~elserj] & [~nihaljain.cs] do you guys 
mind reviewing the patch?

> Having RPC & Space quota on a table/Namespace doesn't allow space quota to be 
> removed using 'NONE'
> --
>
> Key: HBASE-21225
> URL: https://issues.apache.org/jira/browse/HBASE-21225
> Project: HBase
>  Issue Type: Bug
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Major
> Attachments: hbase-21225.master.001.patch, 
> hbase-21225.master.002.patch, hbase-21225.master.003.patch
>
>
> A part of HBASE-20705 is still unresolved. In that Jira the problem was assumed 
> to be: when a table having both rpc & space quotas is dropped (with 
> hbase.quota.remove.on.table.delete set to true), the rpc quota is not dropped 
> along with the table, and the space quota could not be removed completely 
> because of the "EMPTY" row that the rpc quota left behind even after removal. 
> The proposed solution was to make sure that the rpc quota didn't leave 
> empty rows after quota removal, and to set up automatic removal of the rpc quota 
> on table drops. That made sure that space quotas could be recreated/removed.
> But all this was under the assumption that hbase.quota.remove.on.table.delete 
> is set to true. When it is set to false, the same issue can be reproduced. Also, 
> the steps shown below can be used to reproduce the issue without table drops.
> {noformat}
> hbase(main):005:0> create 't2','cf'
> Created table t2
> Took 0.7619 seconds
> => Hbase::Table - t2
> hbase(main):006:0> set_quota TYPE => THROTTLE, TABLE => 't2', LIMIT => 
> '10M/sec'
> Took 0.0514 seconds
> hbase(main):007:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '1G', 
> POLICY => NO_WRITES
> Took 0.0162 seconds
> hbase(main):008:0> list_quotas
> OWNER  QUOTAS
>  TABLE => t2   TYPE => THROTTLE, THROTTLE_TYPE => REQUEST_SIZE, 
> LIMIT => 10M/sec, SCOPE =>
>MACHINE
>  TABLE => t2   TYPE => SPACE, TABLE => t2, LIMIT => 1073741824, 
> VIOLATION_POLICY => NO_WRIT
>ES
> 2 row(s)
> Took 0.0716 seconds
> hbase(main):009:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => NONE
> Took 0.0082 seconds
> hbase(main):010:0> list_quotas
> OWNER   QUOTAS
>  TABLE => t2TYPE => THROTTLE, THROTTLE_TYPE => 
> REQUEST_SIZE, LIMIT => 10M/sec, SCOPE => MACHINE
>  TABLE => t2TYPE => SPACE, TABLE => t2, REMOVE => true
> 2 row(s)
> Took 0.0254 seconds
> hbase(main):011:0> set_quota TYPE => SPACE, TABLE => 't2', LIMIT => '1G', 
> POLICY => NO_WRITES
> Took 0.0082 seconds
> hbase(main):012:0> list_quotas
> OWNER   QUOTAS
>  TABLE => t2TYPE => THROTTLE, THROTTLE_TYPE => 
> REQUEST_SIZE, LIMIT => 10M/sec, SCOPE => MACHINE
>  TABLE => t2TYPE => SPACE, TABLE => t2, REMOVE => true
> 2 row(s)
> Took 0.0411 seconds
> {noformat}





[jira] [Commented] (HBASE-21619) Fix warning message caused by incorrect ternary operator evaluation

2018-12-19 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725343#comment-16725343
 ] 

Wei-Chiu Chuang commented on HBASE-21619:
-

[~yuzhih...@gmail.com] is this something you could review for me? Looks like 
the change was introduced in HBASE-16646.

> Fix warning message caused by incorrect ternary operator evaluation
> ---
>
> Key: HBASE-21619
> URL: https://issues.apache.org/jira/browse/HBASE-21619
> Project: HBase
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
> Attachments: HBASE-21619.master.001.patch
>
>
> {code:title=LoadIncrementalHFiles#doBulkLoad}
> LOG.warn(
>   "Bulk load operation did not find any files to load in " + 
> "directory " + hfofDir != null
>   ? hfofDir.toUri().toString()
>   : "" + ".  Does it contain files in " +
>   "subdirectories that correspond to column family names?");
> {code}
> The JDK warns that {{"Bulk load operation did not find any files to load in " + 
> "directory " + hfofDir != null}} is always true; that is not what was 
> intended, and it produces a wrong message.
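The warning string misparses because {{+}} binds tighter than {{!=}}, so the whole concatenated string (which is never null) is compared against null. A minimal standalone demonstration (plain Java, not the HBase code; the corrected message wording here is only illustrative of the likely fix):

```java
public class TernaryPrecedenceDemo {
    public static void main(String[] args) {
        String hfofDir = null;
        // '+' binds tighter than '!=', so this compares the concatenated
        // string (never null) against null -- the condition is always
        // true, even when hfofDir itself is null.
        boolean buggy = "directory " + hfofDir != null;
        System.out.println(buggy); // true

        // Parenthesizing the null check restores the intended meaning:
        String msg = "Bulk load operation did not find any files to load in directory "
            + (hfofDir != null ? hfofDir : "")
            + ". Does it contain subdirectories that correspond to column family names?";
        System.out.println(msg);
    }
}
```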





[jira] [Commented] (HBASE-21621) Reversed scan does not return expected number of rows

2018-12-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725341#comment-16725341
 ] 

Hadoop QA commented on HBASE-21621:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
42s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
49s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 28s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}242m 40s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}278m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestFromClientSide3 |
|   | hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
|   | hadoop.hbase.client.TestSnapshotDFSTemporaryDirectory |
|   | hadoop.hbase.master.procedure.TestServerCrashProcedureWithReplicas |
|   | hadoop.hbase.client.TestAdmin1 |
|   | hadoop.hbase.client.TestFromClientSide |
|   | hadoop.hbase.regionserver.TestJoinedScanners |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21621 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952377/HBASE-21621.master.UT.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 3eb8b0d2eb1d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8991877bb2 |
| maven | version: Apache Maven 3.5.4 
(1edde

[jira] [Commented] (HBASE-21619) Fix warning message caused by incorrect ternary operator evaluation

2018-12-19 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725397#comment-16725397
 ] 

Ted Yu commented on HBASE-21619:


lgtm



[jira] [Updated] (HBASE-21614) RIT recovery with ServerCrashProcedure doesn't account for all regions

2018-12-19 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-21614:
-
Attachment: HBASE-21614.master.001.patch



[jira] [Updated] (HBASE-21614) RIT recovery with ServerCrashProcedure doesn't account for all regions

2018-12-19 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-21614:
-
Attachment: (was: HBASE-21614.master.001.patch)



[jira] [Updated] (HBASE-21614) RIT recovery with ServerCrashProcedure doesn't account for all regions

2018-12-19 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-21614:
-
Attachment: HBASE-21614.master.001.patch

> RIT recovery with ServerCrashProcedure doesn't account for all regions
> --
>
> Key: HBASE-21614
> URL: https://issues.apache.org/jira/browse/HBASE-21614
> Project: HBase
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HBASE-21614.master.001.patch, 
> HBASE-21614.master.001.patch
>
>
> Master is restarting after a previous master crashed while recovering some 
> regions from a dead server.
> Master recovers RIT for the region, however the RIT has no location (logged, 
> at least) in CONFIRM_CLOSE state. That is a potential problem #0.5 - confirm 
> where? But that should be covered by meta, so not a big deal, right. As such 
> it doesn't seem to add the region to server map anywhere
> {noformat}
> 2018-12-17 14:51:14,606 INFO  [master/:17000:becomeActiveMaster] 
> assignment.AssignmentManager: Attach pid=38015, 
> state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_CLOSED, hasLock=false; 
> TransitRegionStateProcedure table=t1, region=region1, REOPEN/MOVE to 
> rit=OFFLINE, location=null, table=t1, region=region1 to restore RIT
> {noformat}
> -However, in this case ServerCrashProcedure for the server kicks off BEFORE 
> meta is loaded-.
> -That seems to be a problem #1 - it immediately gets regions to later 
> recover, so in this case it gets nothing-.
> -I've grepped our logs for successful cases of SCP interacting with region 
> transition at startup, and in all cases the meta was loaded before SCP-.
> -Seems like a race condition-. Looks like SCP handles it
> {noformat}
> 2018-12-17 14:51:14,625 INFO  [master/:17000:becomeActiveMaster] 
> master.RegionServerTracker: Starting RegionServerTracker; 0 have existing 
> ServerCrashProcedures, 103 possibly 'live' servers, and 1 'splitting'.
> 2018-12-17 14:51:20,770 INFO  [master/:17000:becomeActiveMaster] 
> master.ServerManager: Processing expiration of server1,17020,1544636616174 on 
> ,17000,1545087053243
> 2018-12-17 14:51:20,921 INFO  [master/:17000:becomeActiveMaster] 
> assignment.AssignmentManager: Added server1,17020,1544636616174 to dead 
> servers which carryingMeta=false, submitted ServerCrashProcedure pid=111298
> 2018-12-17 14:51:30,728 INFO  [PEWorker-13] procedure.ServerCrashProcedure: 
> Start pid=111298, state=RUNNABLE:SERVER_CRASH_START, hasLock=true; 
> ServerCrashProcedure server=server1,17020,1544636616174, splitWal=true, 
> meta=false
> {noformat}
> Meta is only loaded 11-12 seconds later.
> If one looks at meta-loading code however, there is one more problem #2 - the 
> region is in CLOSING state, so the {{addRegionToServer}} is not going to be 
> called - it's only called for OPENED regions. 
> Expanding on the above, I've only seen SCP unblock stuck region transition at 
> startup when region started out in meta as OPEN.
> {noformat}
> 2018-12-17 14:51:42,403 INFO  [master/:17000:becomeActiveMaster] 
> assignment.RegionStateStore: Load hbase:meta entry region=region1, 
> regionState=CLOSING, lastHost=server1,17020,1544636616174, 
> regionLocation=server1,17020,1544636616174, openSeqNum=629131
> {noformat}
> SCP predictably finishes without doing anything; no other logs for this pid
> {noformat}
> 2018-12-17 14:52:19,046 INFO  [PEWorker-2] procedure2.ProcedureExecutor: 
> Finished pid=111298, state=SUCCESS, hasLock=false; ServerCrashProcedure 
> server=server1,17020,1544636616174, splitWal=true, meta=false in 58.0010sec
> {noformat}
> After that, the region is still stuck trying to be closed in 
> TransitRegionStateProcedure; it stays in the same state for hours, including 
> across master restarts.
> {noformat}
> 2018-12-17 15:09:35,216 WARN  [PEWorker-14] 
> assignment.TransitRegionStateProcedure: Failed transition, suspend 604secs 
> pid=38015, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE, hasLock=true; 
> TransitRegionStateProcedure table=t1, region=region1, REOPEN/MOVE; 
> rit=CLOSING, location=server1,17020,1544636616174; waiting on rectified 
> condition fixed by other Procedure or operator intervention
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21614) RIT recovery with ServerCrashProcedure doesn't account for all regions

2018-12-19 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-21614:
-
Attachment: (was: HBASE-21614.master.001.patch)

> RIT recovery with ServerCrashProcedure doesn't account for all regions
> --
>
> Key: HBASE-21614
> URL: https://issues.apache.org/jira/browse/HBASE-21614
> Project: HBase
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HBASE-21614.master.001.patch
>
>
> Master is restarting after a previous master crashed while recovering some 
> regions from a dead server.
> Master recovers the RIT for the region; however, the RIT has no location 
> (logged, at least) in CONFIRM_CLOSED state. That is a potential problem #0.5 - 
> confirm where? But that should be covered by meta, so not a big deal, right? 
> As such, it doesn't seem to add the region to the server map anywhere:
> {noformat}
> 2018-12-17 14:51:14,606 INFO  [master/:17000:becomeActiveMaster] 
> assignment.AssignmentManager: Attach pid=38015, 
> state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_CLOSED, hasLock=false; 
> TransitRegionStateProcedure table=t1, region=region1, REOPEN/MOVE to 
> rit=OFFLINE, location=null, table=t1, region=region1 to restore RIT
> {noformat}
> -However, in this case ServerCrashProcedure for the server kicks off BEFORE 
> meta is loaded-.
> -That seems to be a problem #1 - it immediately gets regions to later 
> recover, so in this case it gets nothing-.
> -I've grepped our logs for successful cases of SCP interacting with region 
> transition at startup, and in all cases the meta was loaded before SCP-.
> -Seems like a race condition-. Looks like SCP handles it
> {noformat}
> 2018-12-17 14:51:14,625 INFO  [master/:17000:becomeActiveMaster] 
> master.RegionServerTracker: Starting RegionServerTracker; 0 have existing 
> ServerCrashProcedures, 103 possibly 'live' servers, and 1 'splitting'.
> 2018-12-17 14:51:20,770 INFO  [master/:17000:becomeActiveMaster] 
> master.ServerManager: Processing expiration of server1,17020,1544636616174 on 
> ,17000,1545087053243
> 2018-12-17 14:51:20,921 INFO  [master/:17000:becomeActiveMaster] 
> assignment.AssignmentManager: Added server1,17020,1544636616174 to dead 
> servers which carryingMeta=false, submitted ServerCrashProcedure pid=111298
> 2018-12-17 14:51:30,728 INFO  [PEWorker-13] procedure.ServerCrashProcedure: 
> Start pid=111298, state=RUNNABLE:SERVER_CRASH_START, hasLock=true; 
> ServerCrashProcedure server=server1,17020,1544636616174, splitWal=true, 
> meta=false
> {noformat}
> Meta is only loaded 11-12 seconds later.
> If one looks at the meta-loading code, however, there is one more problem (#2): 
> the region is in CLOSING state, so {{addRegionToServer}} is not going to be 
> called; it's only called for OPENED regions. 
> Expanding on the above, I've only seen SCP unblock stuck region transition at 
> startup when region started out in meta as OPEN.
> {noformat}
> 2018-12-17 14:51:42,403 INFO  [master/:17000:becomeActiveMaster] 
> assignment.RegionStateStore: Load hbase:meta entry region=region1, 
> regionState=CLOSING, lastHost=server1,17020,1544636616174, 
> regionLocation=server1,17020,1544636616174, openSeqNum=629131
> {noformat}
> SCP predictably finishes without doing anything; no other logs for this pid
> {noformat}
> 2018-12-17 14:52:19,046 INFO  [PEWorker-2] procedure2.ProcedureExecutor: 
> Finished pid=111298, state=SUCCESS, hasLock=false; ServerCrashProcedure 
> server=server1,17020,1544636616174, splitWal=true, meta=false in 58.0010sec
> {noformat}
> After that, the region is still stuck trying to be closed in 
> TransitRegionStateProcedure; it stays in the same state for hours, including 
> across master restarts.
> {noformat}
> 2018-12-17 15:09:35,216 WARN  [PEWorker-14] 
> assignment.TransitRegionStateProcedure: Failed transition, suspend 604secs 
> pid=38015, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE, hasLock=true; 
> TransitRegionStateProcedure table=t1, region=region1, REOPEN/MOVE; 
> rit=CLOSING, location=server1,17020,1544636616174; waiting on rectified 
> condition fixed by other Procedure or operator intervention
> {noformat}
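The meta-loading gap described above (problem #2) can be modeled in a few lines. This is a minimal sketch under stated assumptions, not the actual AssignmentManager code: the names {{attachFromMeta}}, {{serverRegions}}, and {{regionsToRecover}} are invented for illustration; only the OPEN-only attach behavior is taken from the report.

```java
import java.util.*;

// Hedged sketch: a simplified model of the meta-load attach behavior the
// report describes. Names here are illustrative, not real HBase APIs.
public class MetaLoadSketch {
    enum RegionState { OPEN, OPENING, CLOSING, CLOSED, OFFLINE }

    // server name -> regions believed to be hosted on that server
    static Map<String, Set<String>> serverRegions = new HashMap<>();

    // Models the reported behavior: only OPEN regions are attached to the
    // server map while loading hbase:meta; CLOSING/OPENING regions are skipped.
    static void attachFromMeta(String region, RegionState state, String location) {
        if (state == RegionState.OPEN) {
            serverRegions.computeIfAbsent(location, k -> new HashSet<>()).add(region);
        }
    }

    static Set<String> regionsToRecover(String crashedServer) {
        return serverRegions.getOrDefault(crashedServer, Collections.emptySet());
    }

    public static void main(String[] args) {
        attachFromMeta("region1", RegionState.CLOSING, "server1,17020,1544636616174");
        // The SCP for server1 finds nothing to recover, matching the
        // "finishes without doing anything" log above.
        System.out.println(regionsToRecover("server1,17020,1544636616174")); // []
    }
}
```

Under this model, any region that was mid-transition (CLOSING here) at meta-load time is invisible to a subsequent ServerCrashProcedure for its server, which is consistent with the stuck RIT shown in the logs.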





[jira] [Updated] (HBASE-21020) Determine WAL API changes for replication

2018-12-19 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated HBASE-21020:
--
Attachment: HBASE-21020.HBASE-20952.004.patch

> Determine WAL API changes for replication
> -
>
> Key: HBASE-21020
> URL: https://issues.apache.org/jira/browse/HBASE-21020
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: HBASE-20952
>
> Attachments: HBASE-21020.HBASE-20952.001.patch, 
> HBASE-21020.HBASE-20952.002.patch, HBASE-21020.HBASE-20952.003.patch, 
> HBASE-21020.HBASE-20952.004.patch
>
>
> Spin-off of HBASE-20952.
> Ankit has started working on what he thinks a WAL API specifically for 
> Replication should look like. In his own words:
> {quote}
> At a high level, it looks like this:
>  * Need to abstract WAL name under WalInfo instead of Paths
>  * Abstract the WalEntryStream for FileSystem and Streaming system.
>  * Build WalStorage APIs to abstract operation on Wal.
>  * Provide the implementation of all above through corresponding WalProvider
> {quote}
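The layering Ankit outlines above can be sketched as a set of interfaces. These shapes (WalInfo, WalEntryStream, WalStorage, WalProvider) are guesses at what the proposal implies, not the actual HBASE-21020 API:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.List;

// Hedged sketch of the proposed WAL abstraction; all shapes are assumptions.
public class WalApiSketch {
    // A WAL identified by a logical name rather than a filesystem Path.
    interface WalInfo {
        String getName();
    }

    // Iteration over WAL entries, independent of filesystem vs streaming backing.
    interface WalEntryStream<E> extends Closeable {
        boolean hasNext() throws IOException;
        E next() throws IOException;
    }

    // Storage operations on WALs, abstracted away from the FileSystem API.
    interface WalStorage<E> {
        List<WalInfo> list() throws IOException;
        WalEntryStream<E> open(WalInfo wal) throws IOException;
        void archive(WalInfo wal) throws IOException;
    }

    // Each provider (filesystem-backed, stream-backed, ...) would supply its
    // own implementations of the above.
    interface WalProvider<E> {
        WalStorage<E> getStorage();
    }
}
```

The point of the design is that replication would consume {{WalEntryStream}} without ever touching a Path, so a non-filesystem WAL backend can be swapped in via a different {{WalProvider}}.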





[jira] [Created] (HBASE-21623) ServerCrashProcedure can stomp on a RIT for the wrong server

2018-12-19 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HBASE-21623:


 Summary: ServerCrashProcedure can stomp on a RIT for the wrong 
server
 Key: HBASE-21623
 URL: https://issues.apache.org/jira/browse/HBASE-21623
 Project: HBase
  Issue Type: Bug
Reporter: Sergey Shelukhin


A server died while some region was being opened on it; eventually the open 
failed, and the RIT procedure started retrying on a different server.
However, by then the SCP for the dying server had already obtained the region 
from the list of regions on the old server, and proceeded to overwrite whatever 
the RIT was doing with a new server.
{noformat}
2018-12-18 23:06:03,160 INFO  [PEWorker-14] procedure2.ProcedureExecutor: 
Initialized subprocedures=[{pid=151404, ppid=151104, state=RUNNABLE, 
hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
...
2018-12-18 23:06:38,208 INFO  [PEWorker-10] procedure.ServerCrashProcedure: 
Start pid=151632, state=RUNNABLE:SERVER_CRASH_START, hasLock=true; 
ServerCrashProcedure server=oldServer,17020,1545202098577, splitWal=true, 
meta=false
...
2018-12-18 23:06:41,953 WARN  [RSProcedureDispatcher-pool4-t115] 
assignment.RegionRemoteProcedureBase: The remote operation pid=151404, 
ppid=151104, state=RUNNABLE, hasLock=false; 
org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure for region 
{ENCODED => region1, ... } to server oldServer,17020,1545202098577 failed
org.apache.hadoop.hbase.regionserver.RegionServerAbortedException: 
org.apache.hadoop.hbase.regionserver.RegionServerAbortedException: Server 
oldServer,17020,1545202098577 aborting

2018-12-18 23:06:42,485 INFO  [PEWorker-5] procedure2.ProcedureExecutor: 
Finished subprocedure(s) of pid=151104, ppid=150875, 
state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; 
TransitRegionStateProcedure table=t1, region=region1, ASSIGN; resume parent 
processing.
2018-12-18 23:06:42,485 INFO  [PEWorker-13] 
assignment.TransitRegionStateProcedure: Retry=1 of max=2147483647; pid=151104, 
ppid=150875, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, 
hasLock=true; TransitRegionStateProcedure table=t1, region=region1, ASSIGN; 
rit=OPENING, location=oldServer,17020,1545202098577
2018-12-18 23:06:42,500 INFO  [PEWorker-13] 
assignment.TransitRegionStateProcedure: Starting pid=151104, ppid=150875, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; 
TransitRegionStateProcedure table=t1, region=region1, ASSIGN; rit=OPENING, 
location=null; forceNewPlan=true, retain=false
2018-12-18 23:06:42,657 INFO  [PEWorker-2] assignment.RegionStateStore: 
pid=151104 updating hbase:meta row=region1, regionState=OPENING, 
regionLocation=newServer,17020,1545202111238
...
2018-12-18 23:06:43,094 INFO  [PEWorker-4] procedure.ServerCrashProcedure: 
pid=151632, state=RUNNABLE:SERVER_CRASH_ASSIGN, hasLock=true; 
ServerCrashProcedure server=oldServer,17020,1545202098577, splitWal=true, 
meta=false found RIT  pid=151104, ppid=150875, 
state=WAITING:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; 
TransitRegionStateProcedure table=t1, region=region1, ASSIGN; rit=OPENING, 
location=newServer,17020,1545202111238, table=t1, region=region1
2018-12-18 23:06:43,094 INFO  [PEWorker-4] assignment.RegionStateStore: 
pid=151104 updating hbase:meta row=region1, regionState=ABNORMALLY_CLOSED
{noformat}


Later, the RIT overwrote the state again, it seems, and then the region got 
stuck in the OPENING state forever, but I'm not sure yet if that's just due to 
this bug or if there was another bug after that. For now this can be addressed.
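The missing guard the report implies can be sketched in a few lines: before an SCP takes over a RIT it found for the crashed server, it should confirm the RIT is still located on that server. This is a minimal sketch; {{RegionInTransition}} and {{shouldInterfere}} are illustrative names, not HBase APIs.

```java
// Hedged sketch of the race above: the RIT retries on a new server, so the
// SCP for the old server must check the RIT's current location before
// stomping on it. All names here are invented for illustration.
public class ScpStompSketch {
    static class RegionInTransition {
        final String region;
        String location; // current target server of the transition

        RegionInTransition(String region, String location) {
            this.region = region;
            this.location = location;
        }
    }

    // An SCP for crashedServer should only take over a RIT that is still
    // pointed at that server; otherwise the RIT has already moved on.
    static boolean shouldInterfere(RegionInTransition rit, String crashedServer) {
        return crashedServer.equals(rit.location);
    }

    public static void main(String[] args) {
        RegionInTransition rit =
            new RegionInTransition("region1", "oldServer,17020,1545202098577");
        // The RIT retries on a new server before the SCP gets to it...
        rit.location = "newServer,17020,1545202111238";
        // ...so the SCP for the old server must not mark it ABNORMALLY_CLOSED.
        System.out.println(shouldInterfere(rit, "oldServer,17020,1545202098577")); // false
    }
}
```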





[jira] [Updated] (HBASE-21623) ServerCrashProcedure can stomp on a RIT for the wrong server

2018-12-19 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-21623:
-
Description: 
A server died while some region was being opened on it; eventually the open 
failed, and the RIT procedure started retrying on a different server.
However, by then the SCP for the dying server had already obtained the region 
from the list of regions on the old server, and proceeded to overwrite whatever 
the RIT was doing with a new server.
{noformat}
2018-12-18 23:06:03,160 INFO  [PEWorker-14] procedure2.ProcedureExecutor: 
Initialized subprocedures=[{pid=151404, ppid=151104, state=RUNNABLE, 
hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
...
2018-12-18 23:06:38,208 INFO  [PEWorker-10] procedure.ServerCrashProcedure: 
Start pid=151632, state=RUNNABLE:SERVER_CRASH_START, hasLock=true; 
ServerCrashProcedure server=oldServer,17020,1545202098577, splitWal=true, 
meta=false
...
2018-12-18 23:06:41,953 WARN  [RSProcedureDispatcher-pool4-t115] 
assignment.RegionRemoteProcedureBase: The remote operation pid=151404, 
ppid=151104, state=RUNNABLE, hasLock=false; 
org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure for region 
{ENCODED => region1, ... } to server oldServer,17020,1545202098577 failed
org.apache.hadoop.hbase.regionserver.RegionServerAbortedException: 
org.apache.hadoop.hbase.regionserver.RegionServerAbortedException: Server 
oldServer,17020,1545202098577 aborting

2018-12-18 23:06:42,485 INFO  [PEWorker-5] procedure2.ProcedureExecutor: 
Finished subprocedure(s) of pid=151104, ppid=150875, 
state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; 
TransitRegionStateProcedure table=t1, region=region1, ASSIGN; resume parent 
processing.
2018-12-18 23:06:42,485 INFO  [PEWorker-13] 
assignment.TransitRegionStateProcedure: Retry=1 of max=2147483647; pid=151104, 
ppid=150875, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, 
hasLock=true; TransitRegionStateProcedure table=t1, region=region1, ASSIGN; 
rit=OPENING, location=oldServer,17020,1545202098577
2018-12-18 23:06:42,500 INFO  [PEWorker-13] 
assignment.TransitRegionStateProcedure: Starting pid=151104, ppid=150875, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; 
TransitRegionStateProcedure table=t1, region=region1, ASSIGN; rit=OPENING, 
location=null; forceNewPlan=true, retain=false
2018-12-18 23:06:42,657 INFO  [PEWorker-2] assignment.RegionStateStore: 
pid=151104 updating hbase:meta row=region1, regionState=OPENING, 
regionLocation=newServer,17020,1545202111238
...
2018-12-18 23:06:43,094 INFO  [PEWorker-4] procedure.ServerCrashProcedure: 
pid=151632, state=RUNNABLE:SERVER_CRASH_ASSIGN, hasLock=true; 
ServerCrashProcedure server=oldServer,17020,1545202098577, splitWal=true, 
meta=false found RIT  pid=151104, ppid=150875, 
state=WAITING:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; 
TransitRegionStateProcedure table=t1, region=region1, ASSIGN; rit=OPENING, 
location=newServer,17020,1545202111238, table=t1, region=region1
2018-12-18 23:06:43,094 INFO  [PEWorker-4] assignment.RegionStateStore: 
pid=151104 updating hbase:meta row=region1, regionState=ABNORMALLY_CLOSED
{noformat}


Later, the RIT overwrote the state again, it seems, and then the region got 
stuck in the OPENING state forever, but I'm not sure yet if that's just due to 
this bug or if there was another bug after that. For now this can be addressed.

  was:
A server died while some region was being opened on it; eventually the open 
failed, and the RIT procedure started retrying on a different server.
However, by then SCP for the dying server has already obtained the region from 
the list of regions on the server, and overwrote whatever the RIT was doing 
with a new server.
{noformat}
2018-12-18 23:06:03,160 INFO  [PEWorker-14] procedure2.ProcedureExecutor: 
Initialized subprocedures=[{pid=151404, ppid=151104, state=RUNNABLE, 
hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
...
2018-12-18 23:06:38,208 INFO  [PEWorker-10] procedure.ServerCrashProcedure: 
Start pid=151632, state=RUNNABLE:SERVER_CRASH_START, hasLock=true; 
ServerCrashProcedure server=oldServer,17020,1545202098577, splitWal=true, 
meta=false
...
2018-12-18 23:06:41,953 WARN  [RSProcedureDispatcher-pool4-t115] 
assignment.RegionRemoteProcedureBase: The remote operation pid=151404, 
ppid=151104, state=RUNNABLE, hasLock=false; 
org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure for region 
{ENCODED => region1, ... } to server oldServer,17020,1545202098577 failed
org.apache.hadoop.hbase.regionserver.RegionServerAbortedException: 
org.apache.hadoop.hbase.regionserver.RegionServerAbortedException: Server 
oldServer,17020,1545202098577 aborting

2018-12-18 23:06:42,485 INFO  [PEWorker-5] procedure2.ProcedureExecutor: 
Finished subprocedure(s) of pid=151104, ppid=150875, 
state=RUNNABLE:REGION

[jira] [Commented] (HBASE-21565) Delete dead server from dead server list too early leads to concurrent Server Crash Procedures(SCP) for a same server

2018-12-19 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725443#comment-16725443
 ] 

Hudson commented on HBASE-21565:


Results for branch branch-2
[build #1567 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1567/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1567//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1567//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1567//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Delete dead server from dead server list too early leads to concurrent Server 
> Crash Procedures(SCP) for a same server
> -
>
> Key: HBASE-21565
> URL: https://issues.apache.org/jira/browse/HBASE-21565
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Critical
> Attachments: HBASE-21565.branch-2.001.patch, 
> HBASE-21565.branch-2.002.patch, HBASE-21565.master.001.patch, 
> HBASE-21565.master.002.patch, HBASE-21565.master.003.patch, 
> HBASE-21565.master.004.patch, HBASE-21565.master.005.patch, 
> HBASE-21565.master.006.patch, HBASE-21565.master.007.patch, 
> HBASE-21565.master.008.patch, HBASE-21565.master.009.patch, 
> HBASE-21565.master.010.patch
>
>
> Two kinds of SCP for the same server can be scheduled during cluster restart: 
> one is triggered by ZK session timeout, the other by a new server reporting 
> in, which causes the stale one to be failed over. The only barrier between 
> these two kinds of SCP is a check of whether the server is in the dead server 
> list.
> {code}
> if (this.deadservers.isDeadServer(serverName)) {
>   LOG.warn("Expiration called on {} but crash processing already in 
> progress", serverName);
>   return false;
> }
> {code}
> But the problem is that when the master finishes initialization, it deletes 
> all stale servers from the dead server list. Thus when the SCP for the ZK 
> session timeout comes in, the barrier is already removed.
> Here is the logs that how this problem occur.
> {code}
> 2018-12-07,11:42:37,589 INFO 
> org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure: Start pid=9, 
> state=RUNNABLE:SERVER_CRASH_START, hasLock=true; ServerCrashProcedure 
> server=c4-hadoop-tst-st27.bj,29100,1544153846859, splitWal=true, meta=false
> 2018-12-07,11:42:58,007 INFO 
> org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure: Start pid=444, 
> state=RUNNABLE:SERVER_CRASH_START, hasLock=true; ServerCrashProcedure 
> server=c4-hadoop-tst-st27.bj,29100,1544153846859, splitWal=true, meta=false
> {code}
> Now we can see that two SCPs are scheduled for the same server.
> But the first procedure finishes after the second SCP starts.
> {code}
> 2018-12-07,11:43:08,038 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=9, 
> state=SUCCESS, hasLock=false; ServerCrashProcedure 
> server=c4-hadoop-tst-st27.bj,29100,1544153846859, splitWal=true, meta=false 
> in 30.5340sec
> {code}
> Thus it leads to the problem that regions are assigned twice.
> {code}
> 2018-12-07,12:16:33,039 WARN 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager: rit=OPEN, 
> location=c4-hadoop-tst-st28.bj,29100,1544154149607, table=test_failover, 
> region=459b3130b40caf3b8f3e1421766f4089 reported OPEN on 
> server=c4-hadoop-tst-st29.bj,29100,1544154149615 but state has otherwise
> {code}
> And here we can see the server is removed from the dead server list before 
> the second SCP starts.
> {code}
> 2018-12-07,11:42:44,938 DEBUG org.apache.hadoop.hbase.master.DeadServer: 
> Removed c4-hadoop-tst-st27.bj,29100,1544153846859 ; numProcessing=3
> {code}
> Thus we should not delete a dead server from the dead server list immediately.
> A patch to fix this problem will be uploaded later.
>  
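The race described above can be modeled directly: the dead-server list is the only barrier against scheduling a second SCP, so removing the entry while the first SCP is still running reopens the door. This is a minimal sketch under stated assumptions; {{expireServer}} and {{scheduledScps}} are illustrative names, not the actual ServerManager code.

```java
import java.util.*;

// Hedged sketch of the duplicate-SCP race: premature cleanup of the dead
// server list lets a second expiration for the same server slip through.
public class DeadServerSketch {
    static Set<String> deadServers = new HashSet<>();
    static List<String> scheduledScps = new ArrayList<>();

    // Mirrors the quoted check: refuse a second expiration while crash
    // processing is (supposed to be) in progress.
    static boolean expireServer(String serverName) {
        if (deadServers.contains(serverName)) {
            return false; // "crash processing already in progress"
        }
        deadServers.add(serverName);
        scheduledScps.add(serverName);
        return true;
    }

    public static void main(String[] args) {
        String s = "c4-hadoop-tst-st27.bj,29100,1544153846859";
        expireServer(s);       // first SCP (e.g. new server report-in)
        deadServers.remove(s); // premature cleanup when master init finishes
        expireServer(s);       // second SCP (ZK session timeout) gets through
        System.out.println(scheduledScps.size()); // 2 - two SCPs for one server
    }
}
```

Deferring the {{deadServers.remove}} until the first SCP actually completes keeps the barrier intact, which is the direction the proposed fix takes.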





[jira] [Updated] (HBASE-21623) ServerCrashProcedure can stomp on a RIT for the wrong server

2018-12-19 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-21623:
-
Description: 
A server died while some region was being opened on it; eventually the open 
failed, and the RIT procedure started retrying on a different server.
However, by then SCP for the dying server had already obtained the region from 
the list of regions on the old server, and proceeded to overwrite whatever the 
RIT was doing with a new server.
{noformat}
2018-12-18 23:06:03,160 INFO  [PEWorker-14] procedure2.ProcedureExecutor: 
Initialized subprocedures=[{pid=151404, ppid=151104, state=RUNNABLE, 
hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
...
2018-12-18 23:06:38,208 INFO  [PEWorker-10] procedure.ServerCrashProcedure: 
Start pid=151632, state=RUNNABLE:SERVER_CRASH_START, hasLock=true; 
ServerCrashProcedure server=oldServer,17020,1545202098577, splitWal=true, 
meta=false
...
2018-12-18 23:06:41,953 WARN  [RSProcedureDispatcher-pool4-t115] 
assignment.RegionRemoteProcedureBase: The remote operation pid=151404, 
ppid=151104, state=RUNNABLE, hasLock=false; 
org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure for region 
{ENCODED => region1, ... } to server oldServer,17020,1545202098577 failed
org.apache.hadoop.hbase.regionserver.RegionServerAbortedException: 
org.apache.hadoop.hbase.regionserver.RegionServerAbortedException: Server 
oldServer,17020,1545202098577 aborting

2018-12-18 23:06:42,485 INFO  [PEWorker-5] procedure2.ProcedureExecutor: 
Finished subprocedure(s) of pid=151104, ppid=150875, 
state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; 
TransitRegionStateProcedure table=t1, region=region1, ASSIGN; resume parent 
processing.
2018-12-18 23:06:42,485 INFO  [PEWorker-13] 
assignment.TransitRegionStateProcedure: Retry=1 of max=2147483647; pid=151104, 
ppid=150875, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, 
hasLock=true; TransitRegionStateProcedure table=t1, region=region1, ASSIGN; 
rit=OPENING, location=oldServer,17020,1545202098577
2018-12-18 23:06:42,500 INFO  [PEWorker-13] 
assignment.TransitRegionStateProcedure: Starting pid=151104, ppid=150875, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; 
TransitRegionStateProcedure table=t1, region=region1, ASSIGN; rit=OPENING, 
location=null; forceNewPlan=true, retain=false
2018-12-18 23:06:42,657 INFO  [PEWorker-2] assignment.RegionStateStore: 
pid=151104 updating hbase:meta row=region1, regionState=OPENING, 
regionLocation=newServer,17020,1545202111238
...
2018-12-18 23:06:43,094 INFO  [PEWorker-4] procedure.ServerCrashProcedure: 
pid=151632, state=RUNNABLE:SERVER_CRASH_ASSIGN, hasLock=true; 
ServerCrashProcedure server=oldServer,17020,1545202098577, splitWal=true, 
meta=false found RIT  pid=151104, ppid=150875, 
state=WAITING:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; 
TransitRegionStateProcedure table=t1, region=region1, ASSIGN; rit=OPENING, 
location=newServer,17020,1545202111238, table=t1, region=region1
2018-12-18 23:06:43,094 INFO  [PEWorker-4] assignment.RegionStateStore: 
pid=151104 updating hbase:meta row=region1, regionState=ABNORMALLY_CLOSED
{noformat}


Later, the RIT overwrote the state again, it seems, and then the region got 
stuck in the OPENING state forever, but I'm not sure yet if that's just due to 
this bug or if there was another bug after that. For now this can be addressed.

  was:
A server died while some region was being opened on it; eventually the open 
failed, and the RIT procedure started retrying on a different server.
However, by then SCP for the dying server has already obtained the region from 
the list of regions on the server, and overwrote whatever the RIT was doing 
with a new server.
{noformat}
2018-12-18 23:06:03,160 INFO  [PEWorker-14] procedure2.ProcedureExecutor: 
Initialized subprocedures=[{pid=151404, ppid=151104, state=RUNNABLE, 
hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
...
2018-12-18 23:06:38,208 INFO  [PEWorker-10] procedure.ServerCrashProcedure: 
Start pid=151632, state=RUNNABLE:SERVER_CRASH_START, hasLock=true; 
ServerCrashProcedure server=oldServer,17020,1545202098577, splitWal=true, 
meta=false
...
2018-12-18 23:06:41,953 WARN  [RSProcedureDispatcher-pool4-t115] 
assignment.RegionRemoteProcedureBase: The remote operation pid=151404, 
ppid=151104, state=RUNNABLE, hasLock=false; 
org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure for region 
{ENCODED => region1, ... } to server oldServer,17020,1545202098577 failed
org.apache.hadoop.hbase.regionserver.RegionServerAbortedException: 
org.apache.hadoop.hbase.regionserver.RegionServerAbortedException: Server 
oldServer,17020,1545202098577 aborting

2018-12-18 23:06:42,485 INFO  [PEWorker-5] procedure2.ProcedureExecutor: 
Finished subprocedure(s) of pid=151104, ppid=150875, 
state=RUNNAB

[jira] [Updated] (HBASE-21623) ServerCrashProcedure can stomp on a RIT for the wrong server

2018-12-19 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-21623:
-
Description: 
A server died while some region was being opened on it; eventually the open 
failed, and the RIT procedure started retrying on a different server.
However, by then SCP for the dying server had already obtained the region from 
the list of regions on the old server, and proceeded to overwrite whatever the 
RIT was doing with a new server.
{noformat}
2018-12-18 23:06:03,160 INFO  [PEWorker-14] procedure2.ProcedureExecutor: 
Initialized subprocedures=[{pid=151404, ppid=151104, state=RUNNABLE, 
hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
...
2018-12-18 23:06:38,208 INFO  [PEWorker-10] procedure.ServerCrashProcedure: 
Start pid=151632, state=RUNNABLE:SERVER_CRASH_START, hasLock=true; 
ServerCrashProcedure server=oldServer,17020,1545202098577, splitWal=true, 
meta=false
...
2018-12-18 23:06:41,953 WARN  [RSProcedureDispatcher-pool4-t115] 
assignment.RegionRemoteProcedureBase: The remote operation pid=151404, 
ppid=151104, state=RUNNABLE, hasLock=false; 
org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure for region 
{ENCODED => region1, ... } to server oldServer,17020,1545202098577 failed
org.apache.hadoop.hbase.regionserver.RegionServerAbortedException: 
org.apache.hadoop.hbase.regionserver.RegionServerAbortedException: Server 
oldServer,17020,1545202098577 aborting

2018-12-18 23:06:42,485 INFO  [PEWorker-5] procedure2.ProcedureExecutor: 
Finished subprocedure(s) of pid=151104, ppid=150875, 
state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; 
TransitRegionStateProcedure table=t1, region=region1, ASSIGN; resume parent 
processing.
2018-12-18 23:06:42,485 INFO  [PEWorker-13] 
assignment.TransitRegionStateProcedure: Retry=1 of max=2147483647; pid=151104, 
ppid=150875, state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, 
hasLock=true; TransitRegionStateProcedure table=t1, region=region1, ASSIGN; 
rit=OPENING, location=oldServer,17020,1545202098577
2018-12-18 23:06:42,500 INFO  [PEWorker-13] 
assignment.TransitRegionStateProcedure: Starting pid=151104, ppid=150875, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; 
TransitRegionStateProcedure table=t1, region=region1, ASSIGN; rit=OPENING, 
location=null; forceNewPlan=true, retain=false
2018-12-18 23:06:42,657 INFO  [PEWorker-2] assignment.RegionStateStore: 
pid=151104 updating hbase:meta row=region1, regionState=OPENING, 
regionLocation=newServer,17020,1545202111238
...
2018-12-18 23:06:43,094 INFO  [PEWorker-4] procedure.ServerCrashProcedure: 
pid=151632, state=RUNNABLE:SERVER_CRASH_ASSIGN, hasLock=true; 
ServerCrashProcedure server=oldServer,17020,1545202098577, splitWal=true, 
meta=false found RIT  pid=151104, ppid=150875, 
state=WAITING:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; 
TransitRegionStateProcedure table=t1, region=region1, ASSIGN; rit=OPENING, 
location=newServer,17020,1545202111238, table=t1, region=region1
2018-12-18 23:06:43,094 INFO  [PEWorker-4] assignment.RegionStateStore: 
pid=151104 updating hbase:meta row=region1, regionState=ABNORMALLY_CLOSED
{noformat}




Later, the RIT overwrote the state again, it seems, and then the region got 
stuck in OPENING state forever, but I'm not sure yet if that's just due to this 
bug or if there was another bug after that. For now this can be addressed.

  was:
A server died while some region was being opened on it; eventually the open 
failed, and the RIT procedure started retrying on a different server.
However, by then SCP for the dying server had already obtained the region from 
the list of regions on the old server, and proceeded to overwrite whatever the 
RIT was doing with a new server.
{noformat}
2018-12-18 23:06:03,160 INFO  [PEWorker-14] procedure2.ProcedureExecutor: 
Initialized subprocedures=[{pid=151404, ppid=151104, state=RUNNABLE, 
hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
...
2018-12-18 23:06:38,208 INFO  [PEWorker-10] procedure.ServerCrashProcedure: 
Start pid=151632, state=RUNNABLE:SERVER_CRASH_START, hasLock=true; 
ServerCrashProcedure server=oldServer,17020,1545202098577, splitWal=true, 
meta=false
...
2018-12-18 23:06:41,953 WARN  [RSProcedureDispatcher-pool4-t115] 
assignment.RegionRemoteProcedureBase: The remote operation pid=151404, 
ppid=151104, state=RUNNABLE, hasLock=false; 
org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure for region 
{ENCODED => region1, ... } to server oldServer,17020,1545202098577 failed
org.apache.hadoop.hbase.regionserver.RegionServerAbortedException: 
org.apache.hadoop.hbase.regionserver.RegionServerAbortedException: Server 
oldServer,17020,1545202098577 aborting

2018-12-18 23:06:42,485 INFO  [PEWorker-5] procedure2.ProcedureExecutor: 
Finished subprocedure(s) of pid=151104, ppid=150875, 

[jira] [Assigned] (HBASE-21623) ServerCrashProcedure can stomp on a RIT for the wrong server

2018-12-19 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HBASE-21623:


Assignee: Sergey Shelukhin

> ServerCrashProcedure can stomp on a RIT for the wrong server
> 
>
> Key: HBASE-21623
> URL: https://issues.apache.org/jira/browse/HBASE-21623
> Project: HBase
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>
> A server died while some region was being opened on it; eventually the open 
> failed, and the RIT procedure started retrying on a different server.
> However, by then SCP for the dying server had already obtained the region 
> from the list of regions on the old server, and proceeded to overwrite 
> whatever the RIT was doing with a new server.
> {noformat}
> 2018-12-18 23:06:03,160 INFO  [PEWorker-14] procedure2.ProcedureExecutor: 
> Initialized subprocedures=[{pid=151404, ppid=151104, state=RUNNABLE, 
> hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
> ...
> 2018-12-18 23:06:38,208 INFO  [PEWorker-10] procedure.ServerCrashProcedure: 
> Start pid=151632, state=RUNNABLE:SERVER_CRASH_START, hasLock=true; 
> ServerCrashProcedure server=oldServer,17020,1545202098577, splitWal=true, 
> meta=false
> ...
> 2018-12-18 23:06:41,953 WARN  [RSProcedureDispatcher-pool4-t115] 
> assignment.RegionRemoteProcedureBase: The remote operation pid=151404, 
> ppid=151104, state=RUNNABLE, hasLock=false; 
> org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure for region 
> {ENCODED => region1, ... } to server oldServer,17020,1545202098577 failed
> org.apache.hadoop.hbase.regionserver.RegionServerAbortedException: 
> org.apache.hadoop.hbase.regionserver.RegionServerAbortedException: Server 
> oldServer,17020,1545202098577 aborting
> 2018-12-18 23:06:42,485 INFO  [PEWorker-5] procedure2.ProcedureExecutor: 
> Finished subprocedure(s) of pid=151104, ppid=150875, 
> state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; 
> TransitRegionStateProcedure table=t1, region=region1, ASSIGN; resume parent 
> processing.
> 2018-12-18 23:06:42,485 INFO  [PEWorker-13] 
> assignment.TransitRegionStateProcedure: Retry=1 of max=2147483647; 
> pid=151104, ppid=150875, 
> state=RUNNABLE:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; 
> TransitRegionStateProcedure table=t1, region=region1, ASSIGN; rit=OPENING, 
> location=oldServer,17020,1545202098577
> 2018-12-18 23:06:42,500 INFO  [PEWorker-13] 
> assignment.TransitRegionStateProcedure: Starting pid=151104, ppid=150875, 
> state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, hasLock=true; 
> TransitRegionStateProcedure table=t1, region=region1, ASSIGN; rit=OPENING, 
> location=null; forceNewPlan=true, retain=false
> 2018-12-18 23:06:42,657 INFO  [PEWorker-2] assignment.RegionStateStore: 
> pid=151104 updating hbase:meta row=region1, regionState=OPENING, 
> regionLocation=newServer,17020,1545202111238
> ...
> 2018-12-18 23:06:43,094 INFO  [PEWorker-4] procedure.ServerCrashProcedure: 
> pid=151632, state=RUNNABLE:SERVER_CRASH_ASSIGN, hasLock=true; 
> ServerCrashProcedure server=oldServer,17020,1545202098577, splitWal=true, 
> meta=false found RIT  pid=151104, ppid=150875, 
> state=WAITING:REGION_STATE_TRANSITION_CONFIRM_OPENED, hasLock=true; 
> TransitRegionStateProcedure table=t1, region=region1, ASSIGN; rit=OPENING, 
> location=newServer,17020,1545202111238, table=t1, region=region1
> 2018-12-18 23:06:43,094 INFO  [PEWorker-4] assignment.RegionStateStore: 
> pid=151104 updating hbase:meta row=region1, regionState=ABNORMALLY_CLOSED
> {noformat}
> Later, the RIT overwrote the state again, it seems, and then the region got 
> stuck in the OPENING state forever, but I'm not sure yet whether that is due 
> solely to this bug or to another bug after it. This bug, at least, can be 
> addressed now.
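The guard implied by this log, namely that an SCP should only interrupt a RIT that is still bound to the crashed server, can be sketched independently of the HBase internals. All names below are illustrative, not the actual procedure API:

```java
// Illustrative sketch only: a crash handler should interrupt a region
// transition only while that transition still targets the dead server.
// If the RIT has already retried onto another server (as in the log above,
// where it moved to newServer), the crash handler must leave it alone.
public class CrashGuardSketch {
  static boolean shouldInterrupt(String ritLocation, String crashedServer) {
    // Only act when the in-flight transition still points at the dead server.
    return ritLocation != null && ritLocation.equals(crashedServer);
  }

  public static void main(String[] args) {
    String crashed = "oldServer,17020,1545202098577";
    // RIT still on the crashed server: the SCP may take it over.
    System.out.println(shouldInterrupt("oldServer,17020,1545202098577", crashed)); // true
    // RIT already moved to newServer: the SCP must not stomp on it.
    System.out.println(shouldInterrupt("newServer,17020,1545202111238", crashed)); // false
  }
}
```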



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21623) ServerCrashProcedure can stomp on a RIT for the wrong server

2018-12-19 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-21623:
-
Attachment: HBASE-21623.patch

> ServerCrashProcedure can stomp on a RIT for the wrong server
> 
>
> Key: HBASE-21623
> URL: https://issues.apache.org/jira/browse/HBASE-21623
> Project: HBase
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HBASE-21623.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21623) ServerCrashProcedure can stomp on a RIT for the wrong server

2018-12-19 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-21623:
-
Status: Patch Available  (was: Open)

[~Apache9] [~stack] can you take a look? It's a small patch.

> ServerCrashProcedure can stomp on a RIT for the wrong server
> 
>
> Key: HBASE-21623
> URL: https://issues.apache.org/jira/browse/HBASE-21623
> Project: HBase
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HBASE-21623.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20984) Add/Modify test case to check custom hbase.wal.dir outside hdfs filesystem

2018-12-19 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725459#comment-16725459
 ] 

Sakthi commented on HBASE-20984:


I find TestFromClientSide3, TestSnapshotDFSTemporaryDirectory, 
TestServerCrashProcedureWithReplicas, and TestAdmin1 very flaky. I'm doubtful 
about the remaining failures as well, since those tests passed locally on my desktop.

> Add/Modify test case to check custom hbase.wal.dir outside hdfs filesystem
> --
>
> Key: HBASE-20984
> URL: https://issues.apache.org/jira/browse/HBASE-20984
> Project: HBase
>  Issue Type: Bug
>  Components: test, wal
>Reporter: Sakthi
>Assignee: Sakthi
>Priority: Minor
> Attachments: hbase-20984.master.001.patch, 
> hbase-20984.master.002.patch, hbase-20984.master.003.patch
>
>
> The current setup in TestWALFactory tries to create a custom WAL directory 
> outside hdfs but ends up creating one inside hdfs. In 
> TestWALFactory.java:
> {code:java}
> public static void setUpBeforeClass() throws Exception {
> CommonFSUtils.setWALRootDir(TEST_UTIL.getConfiguration(), new 
> Path("file:///tmp/wal")); // A local filesystem WAL is attempted
> ...
> hbaseDir = TEST_UTIL.createRootDir();
> hbaseWALDir = TEST_UTIL.createWALRootDir(); // But a directory inside 
> hdfs is created here using HBaseTestingUtility#getNewDataTestDirOnTestFS
> }
> {code}
> The change was made in HBASE-20723
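The mismatch described above, a WAL root configured as file:///tmp/wal that nevertheless ends up on the test HDFS, comes down to which filesystem scheme the resulting path actually carries. A minimal sketch with no HBase dependency (the class and method names below are illustrative, not HBase API):

```java
import java.net.URI;

// Illustrative sketch: decide whether a configured WAL root actually lives on
// the local filesystem or on a remote filesystem such as HDFS, by URI scheme.
public class WalDirSchemeSketch {
  static boolean isLocal(String walRootDir) {
    String scheme = URI.create(walRootDir).getScheme();
    // A null scheme defaults to the cluster's default filesystem in Hadoop,
    // so it is not guaranteed local; only an explicit file:// URI is.
    return "file".equals(scheme);
  }

  public static void main(String[] args) {
    System.out.println(isLocal("file:///tmp/wal"));                 // true
    System.out.println(isLocal("hdfs://namenode:8020/user/hbase")); // false
  }
}
```

A test in the spirit of this issue would assert on the scheme of the directory it actually created, not just on the one it configured.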



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21614) RIT recovery with ServerCrashProcedure doesn't account for all regions

2018-12-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725464#comment-16725464
 ] 

Hadoop QA commented on HBASE-21614:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
56s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
56s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m  0s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}235m 19s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}282m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.master.procedure.TestServerCrashProcedureWithReplicas |
|   | hadoop.hbase.regionserver.TestRegionServerAbortTimeout |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21614 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952397/HBASE-21614.master.001.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 411480c5689c 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8991877bb2 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/15331/artifact/patchproce

[jira] [Commented] (HBASE-21614) RIT recovery with ServerCrashProcedure doesn't account for all regions

2018-12-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725508#comment-16725508
 ] 

Hadoop QA commented on HBASE-21614:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
11s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
12s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m 18s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}145m 23s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}185m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.TestMetaTableAccessor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21614 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952422/HBASE-21614.master.001.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux d6cc2f071766 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8991877bb2 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/15332/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/15332/testReport/ |
| Max. process+thread count | 4252 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server

[jira] [Updated] (HBASE-21617) HBase Bytes.putBigDecimal error

2018-12-19 Thread apcahephoenix (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

apcahephoenix updated HBASE-21617:
--
Attachment: TestBytes.java

> HBase Bytes.putBigDecimal error
> ---
>
> Key: HBASE-21617
> URL: https://issues.apache.org/jira/browse/HBASE-21617
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.1.0, 2.0.0, 2.1.1
> Environment: JDK 1.8
>Reporter: apcahephoenix
>Priority: Major
> Attachments: TestBytes.java
>
>
> *hbase-common/*
> *org.apache.hadoop.hbase.util.Bytes:*
> public static int putBigDecimal(byte[] bytes, int offset, BigDecimal val) {
>   if (bytes == null){
>     return offset;
>   }
>   byte[] valueBytes = val.unscaledValue().toByteArray();
>   byte[] result = new byte[valueBytes.length + SIZEOF_INT];
>   offset = putInt(result, offset, val.scale());
> {color:#d04437}return putBytes(result, offset, valueBytes, 0, 
> valueBytes.length); // this one, bytes is not used{color}
>  }
> *Test:*
>  byte[] bytes = new byte[64];
>  BigDecimal bigDecimal = new BigDecimal("100.10");
>  Bytes.putBigDecimal(bytes, 4, bigDecimal);
>  System.out.println(Arrays.toString(bytes)); // invalid
> *Suggest:*
>  public static int putBigDecimal(byte[] bytes, int offset, BigDecimal val) {
>   byte[] valueBytes = toBytes(val);
>   return putBytes(bytes, offset, valueBytes, 0, valueBytes.length);
>  }
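The reported bug and the suggested fix can be tried in isolation. Below is a minimal standalone sketch, not the actual HBase source: putInt and putBytes are simplified stand-ins for the Bytes helpers, and putBigDecimal writes into the caller's array as the reporter suggests (a 4-byte big-endian scale, then the unscaled value bytes):

```java
import java.math.BigDecimal;
import java.util.Arrays;

// Standalone sketch of the suggested fix (not the actual HBase source).
public class PutBigDecimalSketch {
  // Write a 4-byte big-endian int at offset; return the offset past it.
  static int putInt(byte[] bytes, int offset, int val) {
    for (int i = 3; i >= 0; i--) {
      bytes[offset + i] = (byte) val;
      val >>>= 8;
    }
    return offset + 4;
  }

  // Copy src[srcOffset..srcOffset+len) into dst at dstOffset.
  static int putBytes(byte[] dst, int dstOffset, byte[] src, int srcOffset, int len) {
    System.arraycopy(src, srcOffset, dst, dstOffset, len);
    return dstOffset + len;
  }

  // Fixed version: writes into the caller-supplied array rather than into a
  // freshly allocated one that is then discarded, which was the bug reported.
  static int putBigDecimal(byte[] bytes, int offset, BigDecimal val) {
    byte[] unscaled = val.unscaledValue().toByteArray();
    offset = putInt(bytes, offset, val.scale());
    return putBytes(bytes, offset, unscaled, 0, unscaled.length);
  }

  public static void main(String[] args) {
    byte[] bytes = new byte[64];
    BigDecimal bd = new BigDecimal("100.10"); // scale 2, unscaled value 10010
    int end = putBigDecimal(bytes, 4, bd);
    System.out.println(end); // 10
    System.out.println(Arrays.toString(Arrays.copyOfRange(bytes, 4, end))); // [0, 0, 0, 2, 39, 26]
  }
}
```

With the original code, the same call would leave the caller's array untouched, which is exactly the "invalid" output the reporter's test observes.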



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21617) HBase Bytes.putBigDecimal error

2018-12-19 Thread apcahephoenix (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

apcahephoenix updated HBASE-21617:
--
Attachment: Bytes.java

> HBase Bytes.putBigDecimal error
> ---
>
> Key: HBASE-21617
> URL: https://issues.apache.org/jira/browse/HBASE-21617
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.1.0, 2.0.0, 2.1.1
> Environment: JDK 1.8
>Reporter: apcahephoenix
>Priority: Major
> Attachments: Bytes.java, TestBytes.java
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21401) Sanity check when constructing the KeyValue

2018-12-19 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725524#comment-16725524
 ] 

Zheng Hu commented on HBASE-21401:
--

Sorry for pinging again, [~stack] boss. I'm trying to push this issue forward as 
fast as possible so that the related issues can be finished.

> Sanity check when constructing the KeyValue
> ---
>
> Key: HBASE-21401
> URL: https://issues.apache.org/jira/browse/HBASE-21401
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Critical
> Fix For: 3.0.0, 2.2.0, 2.1.3, 2.0.5
>
> Attachments: HBASE-21401.v1.patch, HBASE-21401.v2.patch, 
> HBASE-21401.v3.patch, HBASE-21401.v4.patch, HBASE-21401.v4.patch, 
> HBASE-21401.v5.patch, HBASE-21401.v6.patch, HBASE-21401.v7.patch
>
>
> In KeyValueDecoder & ByteBuffKeyValueDecoder, we pass a byte buffer to 
> initialize the Cell without a sanity check (i.e., without verifying that each 
> field's offset and length stay within the byte buffer), so an 
> ArrayIndexOutOfBoundsException may happen when reading the cell's fields, as 
> in HBASE-21379; this kind of bug is hard to debug. 
> An earlier check will help to find such bugs.
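The early check this issue asks for can be sketched independently of the HBase Cell types. The helper below is hypothetical and only illustrates the idea of validating every (offset, length) pair against the backing array before the cell is used:

```java
// Hypothetical sketch of an early sanity check: validate each field's
// (offset, length) pair against the backing array so a corrupt buffer fails
// fast with a descriptive message, instead of surfacing later as an opaque
// ArrayIndexOutOfBoundsException deep inside read paths.
public class CellSanityCheckSketch {
  static void checkField(byte[] buf, int offset, int len, String field) {
    // Use long arithmetic so offset + len cannot overflow int.
    if (offset < 0 || len < 0 || (long) offset + len > buf.length) {
      throw new IllegalArgumentException(
          "Invalid " + field + ": offset=" + offset + ", length=" + len
          + ", buffer capacity=" + buf.length);
    }
  }

  // Returns true if all fields fit; throws with a descriptive message otherwise.
  static boolean sanityCheck(byte[] buf, int rowOff, int rowLen,
                             int valueOff, int valueLen) {
    checkField(buf, rowOff, rowLen, "row");
    checkField(buf, valueOff, valueLen, "value");
    return true;
  }

  public static void main(String[] args) {
    byte[] buf = new byte[32];
    System.out.println(sanityCheck(buf, 0, 8, 8, 24)); // true
    try {
      sanityCheck(buf, 0, 8, 16, 24); // value field overruns the buffer
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
  }
}
```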



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21617) HBase Bytes.putBigDecimal error

2018-12-19 Thread apcahephoenix (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725526#comment-16725526
 ] 

apcahephoenix commented on HBASE-21617:
---

I'm not sure how to proceed with a UT; please let me know if you have any 
requirements. The problem and my modification are explained in the description; 
I will now attach the modified and unit test files.

 

{color:#d04437}*src files in the hbase-common/*{color}

*Bytes.putBigDecimal()*

*TestBytes.testPutBigDecimal()*

> HBase Bytes.putBigDecimal error
> ---
>
> Key: HBASE-21617
> URL: https://issues.apache.org/jira/browse/HBASE-21617
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.1.0, 2.0.0, 2.1.1
> Environment: JDK 1.8
>Reporter: apcahephoenix
>Priority: Major
> Attachments: Bytes.java, TestBytes.java
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21623) ServerCrashProcedure can stomp on a RIT for the wrong server

2018-12-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725534#comment-16725534
 ] 

Hadoop QA commented on HBASE-21623:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
48s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
48s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 31s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}115m 
18s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}151m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21623 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12952432/HBASE-21623.patch |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux c60be26fb3b1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8991877bb2 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/15334/testReport/ |
| Max. process+thread count | 5075 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/15334

[jira] [Updated] (HBASE-21618) Scan with the same startRow(inclusive=true) and stopRow(inclusive=false) returns one result

2018-12-19 Thread Guanghao Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-21618:
---
Attachment: HBASE-21618.master.001.patch

> Scan with the same startRow(inclusive=true) and stopRow(inclusive=false) 
> returns one result
> ---
>
> Key: HBASE-21618
> URL: https://issues.apache.org/jira/browse/HBASE-21618
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.2
> Environment: hbase server 2.0.2
> hbase client 2.0.0
>Reporter: Jermy Li
>Priority: Major
> Attachments: HBASE-21618.master.001.patch
>
>
> I expect the following code to return no result, but it still returns a row:
> {code:java}
> byte[] rowkey = "some key existed";
> Scan scan = new Scan();
> scan.withStartRow(rowkey, true);
> scan.withStopRow(rowkey, false);
> htable.getScanner(scan);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21618) Scan with the same startRow(inclusive=true) and stopRow(inclusive=false) returns one result

2018-12-19 Thread Guanghao Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-21618:
---
Status: Patch Available  (was: Open)

> Scan with the same startRow(inclusive=true) and stopRow(inclusive=false) 
> returns one result
> ---
>
> Key: HBASE-21618
> URL: https://issues.apache.org/jira/browse/HBASE-21618
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.2
> Environment: hbase server 2.0.2
> hbase client 2.0.0
>Reporter: Jermy Li
>Priority: Major
> Attachments: HBASE-21618.master.001.patch
>
>
> I expect the following code to return no result, but it still returns a row:
> {code:java}
> byte[] rowkey = "some key existed";
> Scan scan = new Scan();
> scan.withStartRow(rowkey, true);
> scan.withStopRow(rowkey, false);
> htable.getScanner(scan);
> {code}





[jira] [Commented] (HBASE-21618) Scan with the same startRow(inclusive=true) and stopRow(inclusive=false) returns one result

2018-12-19 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725547#comment-16725547
 ] 

Guanghao Zhang commented on HBASE-21618:


The ProtobufUtil#toScan method changes includeStopRow to true for an old 
client which doesn't have this flag. But the new client only sets this flag when 
scan.includeStopRow is true.
{code:java}
// protoScan ==> scan
if (proto.hasIncludeStopRow()) {
  includeStopRow = proto.getIncludeStopRow();
} else {
  // old client without this flag, we should consider start=end as a get.
  if (ClientUtil.areScanStartRowAndStopRowEqual(startRow, stopRow)) {
includeStopRow = true;
  }
}

// scan ==> protoScan
if (scan.includeStopRow()) {
  scanBuilder.setIncludeStopRow(true);
}
{code}
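The mismatch above can be sketched as a tiny self-contained model (hypothetical code, not the real ProtobufUtil/Scan API): because the flag is only serialized when true, the server cannot tell a new client that set includeStopRow=false apart from an old client, and the start==stop compatibility branch turns the scan into a Get that includes the stop row.

```java
import java.util.Arrays;
import java.util.Optional;

// Hypothetical model of the flag round-trip described above.
public class IncludeStopRowRoundTrip {
  // scan ==> protoScan: the flag is only written when true (second snippet above)
  static Optional<Boolean> toProto(boolean includeStopRow) {
    return includeStopRow ? Optional.of(true) : Optional.empty();
  }

  // protoScan ==> scan: absent flag + start==stop is treated as a Get (first snippet above)
  static boolean toScan(Optional<Boolean> flag, byte[] startRow, byte[] stopRow) {
    if (flag.isPresent()) {
      return flag.get();
    }
    // old client without this flag: consider start==end as a get
    return Arrays.equals(startRow, stopRow);
  }

  public static void main(String[] args) {
    byte[] row = "some key existed".getBytes();
    // New client explicitly sets includeStopRow=false, but the flag is
    // dropped on the wire, so the server includes the stop row anyway.
    boolean serverView = toScan(toProto(false), row, row);
    System.out.println("server sees includeStopRow = " + serverView); // prints true
  }
}
```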

> Scan with the same startRow(inclusive=true) and stopRow(inclusive=false) 
> returns one result
> ---
>
> Key: HBASE-21618
> URL: https://issues.apache.org/jira/browse/HBASE-21618
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.2
> Environment: hbase server 2.0.2
> hbase client 2.0.0
>Reporter: Jermy Li
>Priority: Major
> Attachments: HBASE-21618.master.001.patch
>
>
> I expect the following code to return no result, but it still returns a row:
> {code:java}
> byte[] rowkey = "some key existed";
> Scan scan = new Scan();
> scan.withStartRow(rowkey, true);
> scan.withStopRow(rowkey, false);
> htable.getScanner(scan);
> {code}





[jira] [Updated] (HBASE-21618) Scan with the same startRow(inclusive=true) and stopRow(inclusive=false) returns one result

2018-12-19 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-21618:
-
Fix Version/s: 2.0.5
   2.1.3
   1.4.10
   1.2.10
   2.2.0
   1.5.0
   3.0.0

> Scan with the same startRow(inclusive=true) and stopRow(inclusive=false) 
> returns one result
> ---
>
> Key: HBASE-21618
> URL: https://issues.apache.org/jira/browse/HBASE-21618
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.2
> Environment: hbase server 2.0.2
> hbase client 2.0.0
>Reporter: Jermy Li
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.2.10, 1.4.10, 2.1.3, 2.0.5
>
> Attachments: HBASE-21618.master.001.patch
>
>
> I expect the following code to return no result, but it still returns a row:
> {code:java}
> byte[] rowkey = "some key existed";
> Scan scan = new Scan();
> scan.withStartRow(rowkey, true);
> scan.withStopRow(rowkey, false);
> htable.getScanner(scan);
> {code}





[jira] [Updated] (HBASE-21618) Scan with the same startRow(inclusive=true) and stopRow(inclusive=false) returns one result

2018-12-19 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-21618:
-
Fix Version/s: (was: 1.2.10)

> Scan with the same startRow(inclusive=true) and stopRow(inclusive=false) 
> returns one result
> ---
>
> Key: HBASE-21618
> URL: https://issues.apache.org/jira/browse/HBASE-21618
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.2
> Environment: hbase server 2.0.2
> hbase client 2.0.0
>Reporter: Jermy Li
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.1.3, 2.0.5
>
> Attachments: HBASE-21618.master.001.patch
>
>
> I expect the following code to return no result, but it still returns a row:
> {code:java}
> byte[] rowkey = "some key existed";
> Scan scan = new Scan();
> scan.withStartRow(rowkey, true);
> scan.withStopRow(rowkey, false);
> htable.getScanner(scan);
> {code}





[jira] [Updated] (HBASE-21618) Scan with the same startRow(inclusive=true) and stopRow(inclusive=false) returns one result

2018-12-19 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-21618:
-
Priority: Critical  (was: Major)

> Scan with the same startRow(inclusive=true) and stopRow(inclusive=false) 
> returns one result
> ---
>
> Key: HBASE-21618
> URL: https://issues.apache.org/jira/browse/HBASE-21618
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 2.0.2
> Environment: hbase server 2.0.2
> hbase client 2.0.0
>Reporter: Jermy Li
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10, 2.1.3, 2.0.5
>
> Attachments: HBASE-21618.master.001.patch
>
>
> I expect the following code to return no result, but it still returns a row:
> {code:java}
> byte[] rowkey = "some key existed";
> Scan scan = new Scan();
> scan.withStartRow(rowkey, true);
> scan.withStopRow(rowkey, false);
> htable.getScanner(scan);
> {code}





[jira] [Commented] (HBASE-21617) HBase Bytes.putBigDecimal error

2018-12-19 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725552#comment-16725552
 ] 

Zheng Hu commented on HBASE-21617:
--

You can try to put this patch on https://reviews.apache.org, so others can 
help review it.

> HBase Bytes.putBigDecimal error
> ---
>
> Key: HBASE-21617
> URL: https://issues.apache.org/jira/browse/HBASE-21617
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.1.0, 2.0.0, 2.1.1
> Environment: JDK 1.8
>Reporter: apcahephoenix
>Priority: Major
> Attachments: Bytes.java, TestBytes.java
>
>
> *hbase-common/*
> *org.apache.hadoop.hbase.util.Bytes:*
> public static int putBigDecimal(byte[] bytes, int offset, BigDecimal val) {
>   if (bytes == null){
>     return offset;
>   }
>   byte[] valueBytes = val.unscaledValue().toByteArray();
>   byte[] result = new byte[valueBytes.length + SIZEOF_INT];
>   offset = putInt(result, offset, val.scale());
> {color:#d04437}return putBytes(result, offset, valueBytes, 0, 
> valueBytes.length); // this one, bytes is not used{color}
>  }
> *Test:*
>  byte[] bytes = new byte[64];
>  BigDecimal bigDecimal = new BigDecimal("100.10");
>  Bytes.putBigDecimal(bytes, 4, bigDecimal);
>  System.out.println(Arrays.toString(bytes)); // invalid
> *Suggest:*
>  public static int putBigDecimal(byte[] bytes, int offset, BigDecimal val) {
>   byte[] valueBytes = toBytes(val);
>   return putBytes(bytes, offset, valueBytes, 0, valueBytes.length);
>  }
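The suggested fix above can be sketched as a self-contained demo (putInt and putBytes here are simplified stand-ins for the real org.apache.hadoop.hbase.util.Bytes helpers, not the actual implementations): the scale and unscaled value are written directly into the caller's array instead of a throwaway local one.

```java
import java.math.BigDecimal;

public class PutBigDecimalDemo {
  static final int SIZEOF_INT = 4;

  // Simplified stand-in for Bytes.putInt: big-endian int at offset.
  static int putInt(byte[] bytes, int offset, int val) {
    for (int i = 3; i >= 0; i--) { bytes[offset + i] = (byte) val; val >>>= 8; }
    return offset + SIZEOF_INT;
  }

  // Simplified stand-in for Bytes.putBytes: raw copy into the target array.
  static int putBytes(byte[] dst, int offset, byte[] src, int srcOff, int len) {
    System.arraycopy(src, srcOff, dst, offset, len);
    return offset + len;
  }

  // Fixed version per the suggestion: serialize scale + unscaled value
  // into the caller's array, so the target is actually populated.
  static int putBigDecimal(byte[] bytes, int offset, BigDecimal val) {
    byte[] unscaled = val.unscaledValue().toByteArray();
    offset = putInt(bytes, offset, val.scale());
    return putBytes(bytes, offset, unscaled, 0, unscaled.length);
  }

  public static void main(String[] args) {
    byte[] bytes = new byte[64];
    int end = putBigDecimal(bytes, 4, new BigDecimal("100.10"));
    System.out.println("wrote " + (end - 4) + " bytes starting at offset 4"); // wrote 6
  }
}
```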





[jira] [Updated] (HBASE-21620) Problem in scan query when using more than one column prefix filter in some cases.

2018-12-19 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-21620:
-
Attachment: test.patch

> Problem in scan query when using more than one column prefix filter in some 
> cases.
> --
>
> Key: HBASE-21620
> URL: https://issues.apache.org/jira/browse/HBASE-21620
> Project: HBase
>  Issue Type: Bug
>  Components: scan
>Affects Versions: 1.4.8
> Environment: hbase-1.4.8, hbase-1.4.9
> hadoop-2.7.3
>Reporter: Mohamed Mohideen Meeran
>Priority: Major
> Attachments: HBaseImportData.java, file.txt, test.patch
>
>
> In some cases, we are unable to get the scan results when using more than one 
> column prefix filter.
> Attached a java file to import the data which we used and a text file 
> containing the values..
> While executing the following query (hbase shell as well as java program) it 
> is waiting indefinitely and after RPC timeout we got the following error.. 
> Also we noticed high cpu, high load average and very frequent young gc  in 
> the region server containing this row...
> scan 'namespace:tablename',\{STARTROW => 'test',ENDROW => 'test', FILTER => 
> "ColumnPrefixFilter('1544770422942010001_') OR 
> ColumnPrefixFilter('1544769883529010001_')"}
> ROW                                                  COLUMN+CELL              
>                                                      ERROR: Call id=18, 
> waitTime=60005, rpcTimetout=6
>  
> Note: Table scan operation and scan with a single column prefix filter works 
> fine in this case.
> When we check the same query in hbase-1.2.5 it is working fine.
> Can you please help me on this..





[jira] [Commented] (HBASE-21620) Problem in scan query when using more than one column prefix filter in some cases.

2018-12-19 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725563#comment-16725563
 ] 

Zheng Hu commented on HBASE-21620:
--

[~mohamed.meeran],  I wrote a UT for it. Yeah, you are right, it's a bug; I 
will provide a patch for this. 

> Problem in scan query when using more than one column prefix filter in some 
> cases.
> --
>
> Key: HBASE-21620
> URL: https://issues.apache.org/jira/browse/HBASE-21620
> Project: HBase
>  Issue Type: Bug
>  Components: scan
>Affects Versions: 1.4.8
> Environment: hbase-1.4.8, hbase-1.4.9
> hadoop-2.7.3
>Reporter: Mohamed Mohideen Meeran
>Priority: Major
> Attachments: HBaseImportData.java, file.txt, test.patch
>
>
> In some cases, we are unable to get the scan results when using more than one 
> column prefix filter.
> Attached a java file to import the data which we used and a text file 
> containing the values..
> While executing the following query (hbase shell as well as java program) it 
> is waiting indefinitely and after RPC timeout we got the following error.. 
> Also we noticed high cpu, high load average and very frequent young gc  in 
> the region server containing this row...
> scan 'namespace:tablename',\{STARTROW => 'test',ENDROW => 'test', FILTER => 
> "ColumnPrefixFilter('1544770422942010001_') OR 
> ColumnPrefixFilter('1544769883529010001_')"}
> ROW                                                  COLUMN+CELL              
>                                                      ERROR: Call id=18, 
> waitTime=60005, rpcTimetout=6
>  
> Note: Table scan operation and scan with a single column prefix filter works 
> fine in this case.
> When we check the same query in hbase-1.2.5 it is working fine.
> Can you please help me on this..





[jira] [Commented] (HBASE-21565) Delete dead server from dead server list too early leads to concurrent Server Crash Procedures(SCP) for a same server

2018-12-19 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725561#comment-16725561
 ] 

Guanghao Zhang commented on HBASE-21565:


[~stack] There is another issue HBASE-20976 for this problem in branch-2.0 and 
branch-2.1.

> Delete dead server from dead server list too early leads to concurrent Server 
> Crash Procedures(SCP) for a same server
> -
>
> Key: HBASE-21565
> URL: https://issues.apache.org/jira/browse/HBASE-21565
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Critical
> Attachments: HBASE-21565.branch-2.001.patch, 
> HBASE-21565.branch-2.002.patch, HBASE-21565.master.001.patch, 
> HBASE-21565.master.002.patch, HBASE-21565.master.003.patch, 
> HBASE-21565.master.004.patch, HBASE-21565.master.005.patch, 
> HBASE-21565.master.006.patch, HBASE-21565.master.007.patch, 
> HBASE-21565.master.008.patch, HBASE-21565.master.009.patch, 
> HBASE-21565.master.010.patch
>
>
> There are 2 kinds of SCP that will be scheduled for the same server during a 
> cluster restart: one from ZK session timeout, the other when a new server 
> reports in and causes the stale one to fail over. The only barrier between 
> these 2 kinds of SCP is checking whether the server is in the dead server list.
> {code}
> if (this.deadservers.isDeadServer(serverName)) {
>   LOG.warn("Expiration called on {} but crash processing already in 
> progress", serverName);
>   return false;
> }
> {code}
> But the problem is when master finish initialization, it will delete all 
> stale servers from dead server list. Thus when the SCP for ZK session timeout 
> come in, the barrier is already removed.
> Here is the logs that how this problem occur.
> {code}
> 2018-12-07,11:42:37,589 INFO 
> org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure: Start pid=9, 
> state=RUNNABLE:SERVER_CRASH_START, hasLock=true; ServerCrashProcedure 
> server=c4-hadoop-tst-st27.bj,29100,1544153846859, splitWal=true, meta=false
> 2018-12-07,11:42:58,007 INFO 
> org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure: Start pid=444, 
> state=RUNNABLE:SERVER_CRASH_START, hasLock=true; ServerCrashProcedure 
> server=c4-hadoop-tst-st27.bj,29100,1544153846859, splitWal=true, meta=false
> {code}
> Now we can see two SCP are scheduled for the same server.
> But the first procedure is finished after the second SCP starts.
> {code}
> 2018-12-07,11:43:08,038 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=9, 
> state=SUCCESS, hasLock=false; ServerCrashProcedure 
> server=c4-hadoop-tst-st27.bj,29100,1544153846859, splitWal=true, meta=false 
> in 30.5340sec
> {code}
> Thus it leads to the problem that regions will be assigned twice.
> {code}
> 2018-12-07,12:16:33,039 WARN 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager: rit=OPEN, 
> location=c4-hadoop-tst-st28.bj,29100,1544154149607, table=test_failover, 
> region=459b3130b40caf3b8f3e1421766f4089 reported OPEN on 
> server=c4-hadoop-tst-st29.bj,29100,1544154149615 but state has otherwise
> {code}
> And here we can see the server is removed from dead server list before the 
> second SCP starts.
> {code}
> 2018-12-07,11:42:44,938 DEBUG org.apache.hadoop.hbase.master.DeadServer: 
> Removed c4-hadoop-tst-st27.bj,29100,1544153846859 ; numProcessing=3
> {code}
> Thus we should not delete the dead server from the dead server list immediately.
> A patch to fix this problem will be uploaded later.
>  
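The race described above can be modeled with a toy sketch (hypothetical names, not the real master code): the dead-server list is the only guard in expire(), so clearing it before the delayed ZK-timeout expiration arrives lets a duplicate SCP through.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical model of the dead-server barrier described above.
public class DeadServerBarrierDemo {
  static Set<String> deadServers = new HashSet<>();
  static int scheduledScps = 0;

  // Mirrors the barrier check quoted above: refuse a second SCP while the
  // server is still on the dead server list.
  static boolean expire(String serverName) {
    if (deadServers.contains(serverName)) {
      return false; // crash processing already in progress
    }
    deadServers.add(serverName);
    scheduledScps++;
    return true;
  }

  public static void main(String[] args) {
    String server = "rs1,29100,1544153846859";
    expire(server);             // first SCP (new server reported in)
    deadServers.remove(server); // master init clears stale dead servers too early
    expire(server);             // late ZK session timeout: duplicate SCP gets through
    System.out.println("SCPs scheduled = " + scheduledScps); // prints 2
  }
}
```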





[jira] [Assigned] (HBASE-21620) Problem in scan query when using more than one column prefix filter in some cases.

2018-12-19 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu reassigned HBASE-21620:


Assignee: Zheng Hu

> Problem in scan query when using more than one column prefix filter in some 
> cases.
> --
>
> Key: HBASE-21620
> URL: https://issues.apache.org/jira/browse/HBASE-21620
> Project: HBase
>  Issue Type: Bug
>  Components: scan
>Affects Versions: 1.4.8
> Environment: hbase-1.4.8, hbase-1.4.9
> hadoop-2.7.3
>Reporter: Mohamed Mohideen Meeran
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBaseImportData.java, file.txt, test.patch
>
>
> In some cases, we are unable to get the scan results when using more than one 
> column prefix filter.
> Attached a java file to import the data which we used and a text file 
> containing the values..
> While executing the following query (hbase shell as well as java program) it 
> is waiting indefinitely and after RPC timeout we got the following error.. 
> Also we noticed high cpu, high load average and very frequent young gc  in 
> the region server containing this row...
> scan 'namespace:tablename',\{STARTROW => 'test',ENDROW => 'test', FILTER => 
> "ColumnPrefixFilter('1544770422942010001_') OR 
> ColumnPrefixFilter('1544769883529010001_')"}
> ROW                                                  COLUMN+CELL              
>                                                      ERROR: Call id=18, 
> waitTime=60005, rpcTimetout=6
>  
> Note: Table scan operation and scan with a single column prefix filter works 
> fine in this case.
> When we check the same query in hbase-1.2.5 it is working fine.
> Can you please help me on this..





[jira] [Commented] (HBASE-21620) Problem in scan query when using more than one column prefix filter in some cases.

2018-12-19 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725567#comment-16725567
 ] 

Zheng Hu commented on HBASE-21620:
--

Caught this stack in the regionserver: 
{code}
"RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=39128" #154 daemon prio=5 
os_prio=0 tid=0x7fb875839000 nid=0x202d runnable [0x7fb7535f3000]
   java.lang.Thread.State: RUNNABLE
at 
org.apache.hadoop.hbase.filter.FilterListBase.compareCell(FilterListBase.java:86)
at 
org.apache.hadoop.hbase.filter.FilterListWithOR.getNextCellHint(FilterListWithOR.java:371)
at 
org.apache.hadoop.hbase.filter.FilterList.getNextCellHint(FilterList.java:265)
at 
org.apache.hadoop.hbase.regionserver.querymatcher.UserScanQueryMatcher.getNextKeyHint(UserScanQueryMatcher.java:96)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:686)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:152)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6292)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6452)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6224)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2882)
- locked <0x0006cc21a338> (a 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3131)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36613)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2380)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
{code}

> Problem in scan query when using more than one column prefix filter in some 
> cases.
> --
>
> Key: HBASE-21620
> URL: https://issues.apache.org/jira/browse/HBASE-21620
> Project: HBase
>  Issue Type: Bug
>  Components: scan
>Affects Versions: 1.4.8
> Environment: hbase-1.4.8, hbase-1.4.9
> hadoop-2.7.3
>Reporter: Mohamed Mohideen Meeran
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBaseImportData.java, file.txt, test.patch
>
>
> In some cases, we are unable to get the scan results when using more than one 
> column prefix filter.
> Attached a java file to import the data which we used and a text file 
> containing the values..
> While executing the following query (hbase shell as well as java program) it 
> is waiting indefinitely and after RPC timeout we got the following error.. 
> Also we noticed high cpu, high load average and very frequent young gc  in 
> the region server containing this row...
> scan 'namespace:tablename',\{STARTROW => 'test',ENDROW => 'test', FILTER => 
> "ColumnPrefixFilter('1544770422942010001_') OR 
> ColumnPrefixFilter('1544769883529010001_')"}
> ROW                                                  COLUMN+CELL              
>                                                      ERROR: Call id=18, 
> waitTime=60005, rpcTimetout=6
>  
> Note: Table scan operation and scan with a single column prefix filter works 
> fine in this case.
> When we check the same query in hbase-1.2.5 it is working fine.
> Can you please help me on this..





[jira] [Commented] (HBASE-21020) Determine WAL API changes for replication

2018-12-19 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725574#comment-16725574
 ] 

Hadoop QA commented on HBASE-21020:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HBASE-20952 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
30s{color} | {color:green} HBASE-20952 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
13s{color} | {color:green} HBASE-20952 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
10s{color} | {color:green} HBASE-20952 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
49s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
57s{color} | {color:blue} hbase-server in HBASE-20952 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
22s{color} | {color:green} HBASE-20952 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  5m 12s{color} 
| {color:red} root generated 1 new + 1148 unchanged - 1 fixed = 1149 total (was 
1149) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m  
3s{color} | {color:red} root: The patch generated 6 new + 191 unchanged - 13 
fixed = 197 total (was 204) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 8 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
50s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 21s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} hbase-server generated 0 new + 0 unchanged - 1 fixed 
= 0 total (was 1) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | {color:red} hbase-server generated 10 new + 2 unchanged - 0 fixed 
= 12 total (was 2) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  2m 
30s{color} | {color:red} root generated 10 new + 6 unchanged - 0 fixed = 16 
total (was 6) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}239m 14s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 3s{color} | {c

[jira] [Commented] (HBASE-21621) Reversed scan does not return expected number of rows

2018-12-19 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725580#comment-16725580
 ] 

Guanghao Zhang commented on HBASE-21621:


Thanks [~nihaljain.cs] for the nice UT. The problem may be that the 
StoreScanner#trySwitchToStreamRead method should create a new 
ReversedKeyValueHeap for a reversed scan... Let me prepare a patch for this.
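A minimal illustration of that idea (a toy model, not the real StoreScanner/KeyValueHeap classes): popping from a forward-ordered heap yields ascending keys, which violates the descending order a reversed scan's order check expects, while a descending-ordered heap satisfies it.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.PriorityQueue;

// Toy model of why the heap direction matters for a reversed scan.
public class ReversedHeapDemo {
  // Mirrors a reversed scan's order check: each popped key must sort
  // less than or equal to the previous one.
  static boolean reversedOrderHolds(PriorityQueue<String> heap) {
    String prev = heap.poll();
    while (!heap.isEmpty()) {
      String next = heap.poll();
      if (next.compareTo(prev) > 0) {
        return false; // the "error order key" assertion would fire here
      }
      prev = next;
    }
    return true;
  }

  public static void main(String[] args) {
    String[] rows = {"row09", "row03", "row0F"};
    // Forward heap (what the stream-read switch builds today): ascending pops.
    PriorityQueue<String> forward = new PriorityQueue<>();
    forward.addAll(Arrays.asList(rows));
    // Reversed heap (what the fix would build): descending pops.
    PriorityQueue<String> reversed = new PriorityQueue<>(Comparator.reverseOrder());
    reversed.addAll(Arrays.asList(rows));

    System.out.println("forward heap keeps reversed order:  " + reversedOrderHolds(forward));  // false
    System.out.println("reversed heap keeps reversed order: " + reversedOrderHolds(reversed)); // true
  }
}
```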

> Reversed scan does not return expected  number of rows
> --
>
> Key: HBASE-21621
> URL: https://issues.apache.org/jira/browse/HBASE-21621
> Project: HBase
>  Issue Type: Bug
>  Components: scan
>Affects Versions: 3.0.0, 2.1.1
>Reporter: Nihal Jain
>Priority: Critical
> Attachments: HBASE-21621.master.UT.patch
>
>
> *Steps to reproduce*
>  # Create a table and put some data into it (data should be big enough, say N 
> rows)
>  # Flush the table
>  # Scan the table with reversed set to true
> *Expected Result*
> N rows should be retrieved in reversed order
> *Actual Result*
> Less than expected number of rows is retrieved with following error in logs
> {noformat}
> 2018-12-19 21:55:32,944 DEBUG 
> [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=39007] 
> regionserver.StoreScanner(1000): Switch to stream read (scanned=262214 bytes) 
> of cf
> 2018-12-19 21:55:32,955 ERROR 
> [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=39007] 
> ipc.RpcServer(471): Unexpected throwable object 
> java.lang.AssertionError: Key 
> \x00\x00\x00\x00\x00\x00\x00\x09/cf:a/1545236714675/Put/vlen=131072/seqid=4 
> followed by a error order key 
> \x00\x00\x00\x00\x00\x00\x00\x0F/cf:a/1545236715545/Put/vlen=131072/seqid=8 
> in cf cf in reversed scan
>   at 
> org.apache.hadoop.hbase.regionserver.ReversedStoreScanner.checkScanOrder(ReversedStoreScanner.java:105)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:568)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6598)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6762)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6535)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3252)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3501)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> 2018-12-19 21:55:32,955 DEBUG 
> [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=39007] 
> ipc.CallRunner(142): callId: 508 service: ClientService methodName: Scan 
> size: 47 connection: 127.0.0.1:48328 deadline: 1545236792955, 
> exception=java.io.IOException: Key 
> \x00\x00\x00\x00\x00\x00\x00\x09/cf:a/1545236714675/Put/vlen=131072/seqid=4 
> followed by a error order key 
> \x00\x00\x00\x00\x00\x00\x00\x0F/cf:a/1545236715545/Put/vlen=131072/seqid=8 
> in cf cf in reversed scan
> 2018-12-19 21:55:33,060 DEBUG 
> [RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=39007] 
> ipc.CallRunner(142): callId: 511 service: ClientService methodName: Scan 
> size: 47 connection: 127.0.0.1:48328 deadline: 1545236792955, 
> exception=org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> Expected nextCallSeq: 1 But the nextCallSeq got from client: 0; 
> request=scanner_id: 2421102592655360183 number_of_rows: 2147483647 
> close_scanner: false next_call_seq: 0 client_handles_partials: true 
> client_handles_heartbeats: true track_scan_metrics: false renew: false
> 2018-12-19 21:55:33,060 DEBUG [Time-limited test] 
> client.ScannerCallableWithReplicas(200): Scan with primary region returns 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: 
> org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected 
> nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 
> 2421102592655360183 number_of_rows: 2147483647 close_scanner: false 
> next_call_seq: 0 client_handles_partials: true client_handles_heartbeats: 
> true track_scan_metrics: false renew: false
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.checkScanNextCallSeq(RSRpcServices.java:3122)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3455)
>   at 
> org.ap