[jira] [Commented] (HBASE-16458) Shorten backup / restore test execution time

2018-09-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16607939#comment-16607939
 ] 

Hadoop QA commented on HBASE-16458:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
10s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} hbase-backup: The patch generated 11 new + 0 unchanged 
- 0 fixed = 11 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
14s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 28s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}119m 
10s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m  
3s{color} | {color:green} hbase-backup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}177m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-16458 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938936/16458.v2.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  shadedjars  hadoopcheck  
xml  com

[jira] [Commented] (HBASE-20307) LoadTestTool prints too much zookeeper logging

2018-09-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16607992#comment-16607992
 ] 

Hudson commented on HBASE-20307:


Results for branch branch-1.3
[build #458 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/458/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/458//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/458//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/458//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> LoadTestTool prints too much zookeeper logging
> --
>
> Key: HBASE-20307
> URL: https://issues.apache.org/jira/browse/HBASE-20307
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Reporter: Mike Drob
>Assignee: Colin Garcia
>Priority: Major
>  Labels: beginner
> Fix For: 3.0.0, 1.5.0, 1.3.3, 1.2.8, 2.2.0, 1.4.8, 2.1.1, 2.0.3
>
> Attachments: HBASE-20307.000.patch, HBASE-20307.001.patch
>
>
> When running ltt there is a ton of ZK-related cruft that I probably don't
> care about. Hide it behind a -verbose flag or point people at the log4j
> configuration, but don't print it by default.
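
For reference, quieting the ZooKeeper chatter from the log4j side amounts to
raising the log level for the ZooKeeper packages. A minimal sketch of the
log4j.properties entries (the WARN threshold is an assumption; any level above
INFO would do):

{code}
# Quiet the ZooKeeper client logging emitted while ltt runs
log4j.logger.org.apache.zookeeper=WARN
# HBase's own ZK helper classes log under this package
log4j.logger.org.apache.hadoop.hbase.zookeeper=WARN
{code}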



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK

2018-09-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16607991#comment-16607991
 ] 

Hudson commented on HBASE-21166:


Results for branch branch-1.3
[build #458 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/458/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/458//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/458//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/458//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
> -
>
> Key: HBASE-21166
> URL: https://issues.apache.org/jira/browse/HBASE-21166
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.8, 1.2.7
>
> Attachments: HBASE-21166.branch-1.001.patch
>
>
> CoprocessorHConnections are created, for example, during a call of
> CoprocessorHost$Environment.getTable(...). The region server already knows
> the cluster id, yet we're resolving it over and over again.
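
The fix direction is to resolve the cluster id once and reuse it. A minimal
sketch of the caching idea (the class and method names below are invented for
illustration, not taken from the actual patch):

{code}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;

class ClusterIdCache {
  // Memoize the cluster id instead of re-reading it from ZooKeeper on
  // every CoprocessorHConnection creation.
  private static final AtomicReference<String> CLUSTER_ID = new AtomicReference<>();

  static String getClusterId() throws IOException {
    String id = CLUSTER_ID.get();
    if (id == null) {
      // Expensive ZK round trip; on a race it runs twice, first writer wins.
      CLUSTER_ID.compareAndSet(null, readClusterIdFromZK());
      id = CLUSTER_ID.get();
    }
    return id;
  }

  // Stand-in for the real ZooKeeper lookup.
  static String readClusterIdFromZK() throws IOException {
    throw new UnsupportedOperationException("hypothetical helper");
  }
}
{code}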



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK

2018-09-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16607995#comment-16607995
 ] 

Hudson commented on HBASE-21166:


Results for branch branch-1.2
[build #465 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/465/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/465//General_Nightly_Build_Report/]


(/) {color:green}+1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/465//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/465//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
> -
>
> Key: HBASE-21166
> URL: https://issues.apache.org/jira/browse/HBASE-21166
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.8, 1.2.7
>
> Attachments: HBASE-21166.branch-1.001.patch
>
>
> CoprocessorHConnections are created, for example, during a call of
> CoprocessorHost$Environment.getTable(...). The region server already knows
> the cluster id, yet we're resolving it over and over again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20307) LoadTestTool prints too much zookeeper logging

2018-09-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16607996#comment-16607996
 ] 

Hudson commented on HBASE-20307:


Results for branch branch-1.2
[build #465 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/465/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/465//General_Nightly_Build_Report/]


(/) {color:green}+1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/465//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/465//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> LoadTestTool prints too much zookeeper logging
> --
>
> Key: HBASE-20307
> URL: https://issues.apache.org/jira/browse/HBASE-20307
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Reporter: Mike Drob
>Assignee: Colin Garcia
>Priority: Major
>  Labels: beginner
> Fix For: 3.0.0, 1.5.0, 1.3.3, 1.2.8, 2.2.0, 1.4.8, 2.1.1, 2.0.3
>
> Attachments: HBASE-20307.000.patch, HBASE-20307.001.patch
>
>
> When running ltt there is a ton of ZK-related cruft that I probably don't
> care about. Hide it behind a -verbose flag or point people at the log4j
> configuration, but don't print it by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20307) LoadTestTool prints too much zookeeper logging

2018-09-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608000#comment-16608000
 ] 

Hudson commented on HBASE-20307:


Results for branch branch-1
[build #451 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/451/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/451//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/451//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/451//JDK8_Nightly_Build_Report_(Hadoop2)/]




(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> LoadTestTool prints too much zookeeper logging
> --
>
> Key: HBASE-20307
> URL: https://issues.apache.org/jira/browse/HBASE-20307
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Reporter: Mike Drob
>Assignee: Colin Garcia
>Priority: Major
>  Labels: beginner
> Fix For: 3.0.0, 1.5.0, 1.3.3, 1.2.8, 2.2.0, 1.4.8, 2.1.1, 2.0.3
>
> Attachments: HBASE-20307.000.patch, HBASE-20307.001.patch
>
>
> When running ltt there is a ton of ZK-related cruft that I probably don't
> care about. Hide it behind a -verbose flag or point people at the log4j
> configuration, but don't print it by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK

2018-09-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16607999#comment-16607999
 ] 

Hudson commented on HBASE-21166:


Results for branch branch-1
[build #451 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/451/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/451//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/451//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/451//JDK8_Nightly_Build_Report_(Hadoop2)/]




(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
> -
>
> Key: HBASE-21166
> URL: https://issues.apache.org/jira/browse/HBASE-21166
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.8, 1.2.7
>
> Attachments: HBASE-21166.branch-1.001.patch
>
>
> CoprocessorHConnections are created, for example, during a call of
> CoprocessorHost$Environment.getTable(...). The region server already knows
> the cluster id, yet we're resolving it over and over again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21138) Close HRegion instance at the end of every test in TestHRegion

2018-09-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608010#comment-16608010
 ] 

Hudson commented on HBASE-21138:


Results for branch branch-1.4
[build #453 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/453/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/453//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/453//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/453//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Close HRegion instance at the end of every test in TestHRegion
> --
>
> Key: HBASE-21138
> URL: https://issues.apache.org/jira/browse/HBASE-21138
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Mingliang Liu
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.3.3, 2.2.0, 1.4.8
>
> Attachments: HBASE-21138.000.patch, HBASE-21138.001.patch, 
> HBASE-21138.002.patch, HBASE-21138.003.patch, HBASE-21138.004.patch, 
> HBASE-21138.branch-1.004.patch, HBASE-21138.branch-1.004.patch, 
> HBASE-21138.branch-2.004.patch
>
>
> TestHRegion has over 100 tests.
> The following is from one subtest:
> {code}
>   public void testCompactionAffectedByScanners() throws Exception {
> byte[] family = Bytes.toBytes("family");
> this.region = initHRegion(tableName, method, CONF, family);
> {code}
> this.region is not closed at the end of the subtest.
> testToShowNPEOnRegionScannerReseek is another example.
> Every subtest should use the following construct toward the end:
> {code}
> } finally {
>   HBaseTestingUtility.closeRegionAndWAL(this.region);
> {code}
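
Putting the two quoted fragments together, a subtest following the suggested
construct would look roughly like this (a sketch assembled from the snippets
above, assuming the usual TestHRegion fixtures):

{code}
@Test
public void testCompactionAffectedByScanners() throws Exception {
  byte[] family = Bytes.toBytes("family");
  this.region = initHRegion(tableName, method, CONF, family);
  try {
    // ... test body exercising compactions and scanners ...
  } finally {
    // Release the region and its WAL even when the assertions above fail.
    HBaseTestingUtility.closeRegionAndWAL(this.region);
  }
}
{code}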



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20307) LoadTestTool prints too much zookeeper logging

2018-09-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608012#comment-16608012
 ] 

Hudson commented on HBASE-20307:


Results for branch branch-1.4
[build #453 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/453/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/453//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/453//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/453//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> LoadTestTool prints too much zookeeper logging
> --
>
> Key: HBASE-20307
> URL: https://issues.apache.org/jira/browse/HBASE-20307
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Reporter: Mike Drob
>Assignee: Colin Garcia
>Priority: Major
>  Labels: beginner
> Fix For: 3.0.0, 1.5.0, 1.3.3, 1.2.8, 2.2.0, 1.4.8, 2.1.1, 2.0.3
>
> Attachments: HBASE-20307.000.patch, HBASE-20307.001.patch
>
>
> When running ltt there is a ton of ZK-related cruft that I probably don't
> care about. Hide it behind a -verbose flag or point people at the log4j
> configuration, but don't print it by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21166) Creating a CoprocessorHConnection re-retrieves the cluster id from ZK

2018-09-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608011#comment-16608011
 ] 

Hudson commented on HBASE-21166:


Results for branch branch-1.4
[build #453 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/453/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/453//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/453//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/453//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Creating a CoprocessorHConnection re-retrieves the cluster id from ZK
> -
>
> Key: HBASE-21166
> URL: https://issues.apache.org/jira/browse/HBASE-21166
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.5.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.8, 1.2.7
>
> Attachments: HBASE-21166.branch-1.001.patch
>
>
> CoprocessorHConnections are created, for example, during a call of
> CoprocessorHost$Environment.getTable(...). The region server already knows
> the cluster id, yet we're resolving it over and over again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-16458) Shorten backup / restore test execution time

2018-09-08 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16458:
---
Attachment: 16458.v4.txt

> Shorten backup / restore test execution time
> 
>
> Key: HBASE-16458
> URL: https://issues.apache.org/jira/browse/HBASE-16458
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Vladimir Rodionov
>Priority: Major
>  Labels: backup
> Attachments: 16458-v1.patch, 16458.HBASE-7912.v3.txt, 
> 16458.HBASE-7912.v4.txt, 16458.HBASE-7912.v5.txt, 16458.v1.txt, 16458.v2.txt, 
> 16458.v2.txt, 16458.v3.txt, 16458.v4.txt, HBASE-16458-v1.patch, 
> HBASE-16458-v2.patch
>
>
> Below is the timing information for all the backup / restore tests (today's
> results):
> {code}
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackup
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 576.273 sec - 
> in org.apache.hadoop.hbase.backup.TestIncrementalBackup
> Running org.apache.hadoop.hbase.backup.TestBackupBoundaryTests
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.67 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupBoundaryTests
> Running org.apache.hadoop.hbase.backup.TestBackupStatusProgress
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.34 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupStatusProgress
> Running org.apache.hadoop.hbase.backup.TestBackupAdmin
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 490.251 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupAdmin
> Running org.apache.hadoop.hbase.backup.TestHFileArchiving
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.323 sec - 
> in org.apache.hadoop.hbase.backup.TestHFileArchiving
> Running org.apache.hadoop.hbase.backup.TestSystemTableSnapshot
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.492 sec - 
> in org.apache.hadoop.hbase.backup.TestSystemTableSnapshot
> Running org.apache.hadoop.hbase.backup.TestBackupDescribe
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.758 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupDescribe
> Running org.apache.hadoop.hbase.backup.TestBackupLogCleaner
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.187 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupLogCleaner
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 330.539 sec - 
> in org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss
> Running org.apache.hadoop.hbase.backup.TestRemoteBackup
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.371 sec - 
> in org.apache.hadoop.hbase.backup.TestRemoteBackup
> Running org.apache.hadoop.hbase.backup.TestBackupSystemTable
> Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.893 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupSystemTable
> Running org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.779 sec - 
> in org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests
> Running org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.815 sec - 
> in org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet
> Running org.apache.hadoop.hbase.backup.TestBackupShowHistory
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 136.517 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupShowHistory
> Running org.apache.hadoop.hbase.backup.TestRemoteRestore
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.799 sec - 
> in org.apache.hadoop.hbase.backup.TestRemoteRestore
> Running org.apache.hadoop.hbase.backup.TestFullRestore
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 317.711 sec 
> - in org.apache.hadoop.hbase.backup.TestFullRestore
> Running org.apache.hadoop.hbase.backup.TestFullBackupSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.045 sec - 
> in org.apache.hadoop.hbase.backup.TestFullBackupSet
> Running org.apache.hadoop.hbase.backup.TestBackupDelete
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.214 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupDelete
> Running org.apache.hadoop.hbase.backup.TestBackupDeleteRestore
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.631 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupDeleteRestore
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackupDeleteTable
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 190.358 sec - 
> in org.apache.hadoop.hbase.backup.TestIncrementalBackupDeleteTable
> Running 
> org.apac

[jira] [Commented] (HBASE-16458) Shorten backup / restore test execution time

2018-09-08 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608029#comment-16608029
 ] 

Ted Yu commented on HBASE-16458:


Attached 16458.v4.txt to address the checkstyle warnings.

> Shorten backup / restore test execution time
> 
>
> Key: HBASE-16458
> URL: https://issues.apache.org/jira/browse/HBASE-16458
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Vladimir Rodionov
>Priority: Major
>  Labels: backup
> Attachments: 16458-v1.patch, 16458.HBASE-7912.v3.txt, 
> 16458.HBASE-7912.v4.txt, 16458.HBASE-7912.v5.txt, 16458.v1.txt, 16458.v2.txt, 
> 16458.v2.txt, 16458.v3.txt, 16458.v4.txt, HBASE-16458-v1.patch, 
> HBASE-16458-v2.patch
>
>
> Below is the timing information for all the backup / restore tests (today's
> results):
> {code}
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackup
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 576.273 sec - 
> in org.apache.hadoop.hbase.backup.TestIncrementalBackup
> Running org.apache.hadoop.hbase.backup.TestBackupBoundaryTests
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.67 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupBoundaryTests
> Running org.apache.hadoop.hbase.backup.TestBackupStatusProgress
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.34 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupStatusProgress
> Running org.apache.hadoop.hbase.backup.TestBackupAdmin
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 490.251 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupAdmin
> Running org.apache.hadoop.hbase.backup.TestHFileArchiving
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.323 sec - 
> in org.apache.hadoop.hbase.backup.TestHFileArchiving
> Running org.apache.hadoop.hbase.backup.TestSystemTableSnapshot
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.492 sec - 
> in org.apache.hadoop.hbase.backup.TestSystemTableSnapshot
> Running org.apache.hadoop.hbase.backup.TestBackupDescribe
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.758 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupDescribe
> Running org.apache.hadoop.hbase.backup.TestBackupLogCleaner
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.187 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupLogCleaner
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 330.539 sec - 
> in org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss
> Running org.apache.hadoop.hbase.backup.TestRemoteBackup
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.371 sec - 
> in org.apache.hadoop.hbase.backup.TestRemoteBackup
> Running org.apache.hadoop.hbase.backup.TestBackupSystemTable
> Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.893 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupSystemTable
> Running org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.779 sec - 
> in org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests
> Running org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.815 sec - 
> in org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet
> Running org.apache.hadoop.hbase.backup.TestBackupShowHistory
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 136.517 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupShowHistory
> Running org.apache.hadoop.hbase.backup.TestRemoteRestore
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.799 sec - 
> in org.apache.hadoop.hbase.backup.TestRemoteRestore
> Running org.apache.hadoop.hbase.backup.TestFullRestore
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 317.711 sec 
> - in org.apache.hadoop.hbase.backup.TestFullRestore
> Running org.apache.hadoop.hbase.backup.TestFullBackupSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.045 sec - 
> in org.apache.hadoop.hbase.backup.TestFullBackupSet
> Running org.apache.hadoop.hbase.backup.TestBackupDelete
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.214 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupDelete
> Running org.apache.hadoop.hbase.backup.TestBackupDeleteRestore
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.631 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupDeleteRestore
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackupDeleteTable
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 190.358 sec - 
> in org.apache.hadoop.

[jira] [Created] (HBASE-21172) Reimplement the retry backoff logic for ReopenTableRegionsProcedure

2018-09-08 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-21172:
-

 Summary: Reimplement the retry backoff logic for 
ReopenTableRegionsProcedure
 Key: HBASE-21172
 URL: https://issues.apache.org/jira/browse/HBASE-21172
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang


Now we just do a blocking sleep in the execute method, and there is no 
exponential backoff.
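
For contrast, the usual shape of retry with exponential backoff (a generic
sketch, not the actual ReopenTableRegionsProcedure change; the base delay and
cap are arbitrary assumptions):

{code}
// The wait doubles per attempt up to a ceiling, instead of one fixed
// blocking sleep inside execute().
static long backoffMillis(int attempt) {
  final long baseMillis = 1_000L;  // first retry after 1s (assumed)
  final long capMillis = 60_000L;  // never wait longer than 60s (assumed)
  return Math.min(capMillis, baseMillis << Math.min(attempt, 20));
}
{code}

In a procedure framework the wait would typically become a timed suspend of
the procedure rather than a thread sleep, so the worker thread is freed while
the backoff elapses.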



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21172) Reimplement the retry backoff logic for ReopenTableRegionsProcedure

2018-09-08 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21172:
--
Fix Version/s: 2.0.3
   2.1.1
   2.2.0
   3.0.0

> Reimplement the retry backoff logic for ReopenTableRegionsProcedure
> ---
>
> Key: HBASE-21172
> URL: https://issues.apache.org/jira/browse/HBASE-21172
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
>
> Now we just do a blocking sleep in the execute method, and there is no 
> exponential backoff.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21172) Reimplement the retry backoff logic for ReopenTableRegionsProcedure

2018-09-08 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21172:
--
Component/s: proc-v2
 amv2

> Reimplement the retry backoff logic for ReopenTableRegionsProcedure
> ---
>
> Key: HBASE-21172
> URL: https://issues.apache.org/jira/browse/HBASE-21172
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
>
> Now we just do a blocking sleep in the execute method, and there is no 
> exponential backoff.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-21172) Reimplement the retry backoff logic for ReopenTableRegionsProcedure

2018-09-08 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reassigned HBASE-21172:
-

Assignee: Duo Zhang

> Reimplement the retry backoff logic for ReopenTableRegionsProcedure
> ---
>
> Key: HBASE-21172
> URL: https://issues.apache.org/jira/browse/HBASE-21172
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
>
> Now we just do a blocking sleep in the execute method, and there is no 
> exponential backoff.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20307) LoadTestTool prints too much zookeeper logging

2018-09-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608052#comment-16608052
 ] 

Hudson commented on HBASE-20307:


Results for branch master
[build #480 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/480/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/480//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/480//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/480//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> LoadTestTool prints too much zookeeper logging
> --
>
> Key: HBASE-20307
> URL: https://issues.apache.org/jira/browse/HBASE-20307
> Project: HBase
>  Issue Type: Bug
>  Components: tooling
>Reporter: Mike Drob
>Assignee: Colin Garcia
>Priority: Major
>  Labels: beginner
> Fix For: 3.0.0, 1.5.0, 1.3.3, 1.2.8, 2.2.0, 1.4.8, 2.1.1, 2.0.3
>
> Attachments: HBASE-20307.000.patch, HBASE-20307.001.patch
>
>
> When running ltt there is a ton of ZK-related cruft that I probably don't
> care about. Hide it behind a -verbose flag or point people at the log4j
> configuration, but don't print it by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21144) AssignmentManager.waitForAssignment is not stable

2018-09-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608053#comment-16608053
 ] 

Hudson commented on HBASE-21144:


Results for branch master
[build #480 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/480/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/480//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/480//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/480//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> AssignmentManager.waitForAssignment is not stable
> -
>
> Key: HBASE-21144
> URL: https://issues.apache.org/jira/browse/HBASE-21144
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21144-addendum.patch, HBASE-21144-v1.patch, 
> HBASE-21144.patch
>
>
> https://builds.apache.org/job/HBase-Flaky-Tests/job/master/366/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestMetaWithReplicas-output.txt/*view*/
> All replicas for the meta table are on the same machine
> {noformat}
> 2018-09-02 19:49:05,486 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1.1588230740 on 
> asf904.gq1.ygridcore.net,47561,1535917740998
> 2018-09-02 19:49:32,802 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0001.534574363 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> 2018-09-02 19:49:33,496 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0002.1657623790 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> {noformat}
> But after calling am.waitForAssignment, the region location is still null...
> {noformat}
> 2018-09-02 19:49:32,414 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0001.534574363 on null
> 2018-09-02 19:49:32,844 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0002.1657623790 on null
> {noformat}
> So we will not balance the replicas, which causes TestMetaWithReplicas to
> hang forever...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21001) ReplicationObserver fails to load in HBase 2.0.0

2018-09-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608051#comment-16608051
 ] 

Hudson commented on HBASE-21001:


Results for branch master
[build #480 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/480/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/480//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/480//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/480//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> ReplicationObserver fails to load in HBase 2.0.0
> 
>
> Key: HBASE-21001
> URL: https://issues.apache.org/jira/browse/HBASE-21001
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4, 2.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Guangxu Cheng
>Priority: Major
>  Labels: replication
> Attachments: HBASE-21001.branch-2.0.001.patch, 
> HBASE-21001.master.001.patch, HBASE-21001.master.001.patch, 
> HBASE-21001.master.002.patch, HBASE-21001.master.003.patch, 
> HBASE-21001.master.004.patch
>
>
> ReplicationObserver was added in HBASE-17290 to prevent "Potential loss of 
> data for replication of bulk loaded hfiles".
> I tried to enable bulk loading replication feature 
> (hbase.replication.bulkload.enabled=true and configure 
> hbase.replication.cluster.id) on a HBase 2.0.0 cluster, but the RegionServer 
> started with the following error:
> {quote}
> 2018-08-02 18:20:36,365 INFO 
> org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: System 
> coprocessor loading is enabled
> 2018-08-02 18:20:36,365 INFO 
> org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: Table 
> coprocessor loading is enabled
> 2018-08-02 18:20:36,365 ERROR 
> org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationObserver is not 
> of type
> RegionServerCoprocessor. Check the configuration of 
> hbase.coprocessor.regionserver.classes
> 2018-08-02 18:20:36,366 ERROR 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost: Cannot load coprocessor 
> ReplicationObserver
> {quote}
> It looks like this was broken by HBASE-17732 to me, but I could be wrong. 
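
The two settings named in the description map to hbase-site.xml entries like
these (only the property names come from the report; the cluster id value is a
placeholder):

{code}
<property>
  <name>hbase.replication.bulkload.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hbase.replication.cluster.id</name>
  <!-- placeholder id; pick a value unique to the source cluster -->
  <value>source-cluster-1</value>
</property>
{code}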



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21172) Reimplement the retry backoff logic for ReopenTableRegionsProcedure

2018-09-08 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21172:
--
Status: Patch Available  (was: Open)

> Reimplement the retry backoff logic for ReopenTableRegionsProcedure
> ---
>
> Key: HBASE-21172
> URL: https://issues.apache.org/jira/browse/HBASE-21172
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21172.patch
>
>
> Now we just do a blocking sleep in the execute method, and there is no 
> exponential backoff.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21172) Reimplement the retry backoff logic for ReopenTableRegionsProcedure

2018-09-08 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21172:
--
Attachment: HBASE-21172.patch

> Reimplement the retry backoff logic for ReopenTableRegionsProcedure
> ---
>
> Key: HBASE-21172
> URL: https://issues.apache.org/jira/browse/HBASE-21172
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21172.patch
>
>
> Now we just do a blocking sleep in the execute method, and there is no 
> exponential backoff.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21172) Reimplement the retry backoff logic for ReopenTableRegionsProcedure

2018-09-08 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608055#comment-16608055
 ] 

Duo Zhang commented on HBASE-21172:
---

Review board link:

https://reviews.apache.org/r/68674/

> Reimplement the retry backoff logic for ReopenTableRegionsProcedure
> ---
>
> Key: HBASE-21172
> URL: https://issues.apache.org/jira/browse/HBASE-21172
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21172.patch
>
>
> Now we just do a blocking sleep in the execute method, and there is no 
> exponential backoff.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21142) ReopenTableRegionsProcedure sometimes hangs

2018-09-08 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608061#comment-16608061
 ] 

Duo Zhang commented on HBASE-21142:
---

The TestClientOperationTimeout also failed in a strange way...

https://builds.apache.org/job/HBase-Flaky-Tests/job/master/503/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.TestClientOperationTimeout-output.txt/*view*/

{noformat}
2018-09-08 09:16:17,488 INFO  [Time-limited test] hbase.ResourceChecker(148): 
before: TestClientOperationTimeout#testPutTimeout Thread=208, 
OpenFileDescriptor=1083, MaxFileDescriptor=6, SystemLoadAverage=944, 
ProcessCount=328, AvailableMemoryMB=6963
2018-09-08 09:16:17,489 WARN  [Time-limited test] hbase.ResourceChecker(135): 
OpenFileDescriptor=1083 is superior to 1024
2018-09-08 09:16:17,511 INFO  [RS-EventLoopGroup-1-3] 
ipc.ServerRpcConnection(556): Connection from 67.195.81.139:60496, 
version=3.0.0-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), 
service=MasterService
2018-09-08 09:16:17,532 INFO  
[RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=42070] 
master.HMaster$3(1917): Client=jenkins//67.195.81.139 create 'testPutTimeout', 
{NAME => 'family', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', 
NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', 
CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 
'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 
'NONE', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', 
CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', 
COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
2018-09-08 09:16:18,046 INFO  [Time-limited test] hbase.ResourceChecker(172): 
after: TestClientOperationTimeout#testPutTimeout Thread=209 (was 208)
{noformat}

All the test methods finish very quickly, but later we are still trying to
create the test tables!

{noformat}
2018-09-08 09:16:18,893 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=4,queue=0,port=42070] 
procedure2.ProcedureExecutor(1117): Stored pid=9, 
state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, hasLock=false; CreateTableProcedure 
table=testPutTimeout
2018-09-08 09:16:19,119 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=42070] 
procedure2.ProcedureExecutor(1117): Stored pid=11, 
state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, hasLock=false; CreateTableProcedure 
table=testScanTimeout
2018-09-08 09:16:19,119 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=42070] 
procedure2.ProcedureExecutor(1117): Stored pid=10, 
state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, hasLock=false; CreateTableProcedure 
table=testGetTimeout
{noformat}

> ReopenTableRegionsProcedure sometimes hangs
> ---
>
> Key: HBASE-21142
> URL: https://issues.apache.org/jira/browse/HBASE-21142
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Priority: Major
>
> https://builds.apache.org/job/HBase-Flaky-Tests/job/master/364/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.replication.TestSyncReplicationMoreLogsInLocalGiveUpSplitting-output.txt/*view*/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21001) ReplicationObserver fails to load in HBase 2.0.0

2018-09-08 Thread Guangxu Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608066#comment-16608066
 ] 

Guangxu Cheng commented on HBASE-21001:
---

bq. Nit: we don't need to call region.close explicitly in UT after HBASE-21138.
Thanks [~liuml07]. I didn't notice it before. Reviewing TestHRegion again, some
test methods still have the duplicate close. Maybe we can create a new issue to
fix it.

> ReplicationObserver fails to load in HBase 2.0.0
> 
>
> Key: HBASE-21001
> URL: https://issues.apache.org/jira/browse/HBASE-21001
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4, 2.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Guangxu Cheng
>Priority: Major
>  Labels: replication
> Attachments: HBASE-21001.branch-2.0.001.patch, 
> HBASE-21001.master.001.patch, HBASE-21001.master.001.patch, 
> HBASE-21001.master.002.patch, HBASE-21001.master.003.patch, 
> HBASE-21001.master.004.patch
>
>
> ReplicationObserver was added in HBASE-17290 to prevent "Potential loss of 
> data for replication of bulk loaded hfiles".
> I tried to enable bulk loading replication feature 
> (hbase.replication.bulkload.enabled=true and configure 
> hbase.replication.cluster.id) on a HBase 2.0.0 cluster, but the RegionServer 
> started with the following error:
> {quote}
> 2018-08-02 18:20:36,365 INFO 
> org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: System 
> coprocessor loading is enabled
> 2018-08-02 18:20:36,365 INFO 
> org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: Table 
> coprocessor loading is enabled
> 2018-08-02 18:20:36,365 ERROR 
> org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationObserver is not 
> of type
> RegionServerCoprocessor. Check the configuration of 
> hbase.coprocessor.regionserver.classes
> 2018-08-02 18:20:36,366 ERROR 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost: Cannot load coprocessor 
> ReplicationObserver
> {quote}
> It looks like this was broken by HBASE-17732 to me, but I could be wrong. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21173) Remove the duplicate HRegion#close in TestHRegion

2018-09-08 Thread Guangxu Cheng (JIRA)
Guangxu Cheng created HBASE-21173:
-

 Summary: Remove the duplicate HRegion#close in TestHRegion
 Key: HBASE-21173
 URL: https://issues.apache.org/jira/browse/HBASE-21173
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0, 2.2.0
Reporter: Guangxu Cheng
Assignee: Guangxu Cheng


After HBASE-21138, some test methods still have the duplicate HRegion#close. So
opening this issue to remove the duplicate close.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21001) ReplicationObserver fails to load in HBase 2.0.0

2018-09-08 Thread Guangxu Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608068#comment-16608068
 ] 

Guangxu Cheng commented on HBASE-21001:
---

Pushed to branch-2.0 and created the new issue HBASE-21173 to remove the
duplicate close. Thanks [~yuzhih...@gmail.com] [~jojochuang] [~liuml07] for the
review.

> ReplicationObserver fails to load in HBase 2.0.0
> 
>
> Key: HBASE-21001
> URL: https://issues.apache.org/jira/browse/HBASE-21001
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4, 2.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Guangxu Cheng
>Priority: Major
>  Labels: replication
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21001.branch-2.0.001.patch, 
> HBASE-21001.master.001.patch, HBASE-21001.master.001.patch, 
> HBASE-21001.master.002.patch, HBASE-21001.master.003.patch, 
> HBASE-21001.master.004.patch
>
>
> ReplicationObserver was added in HBASE-17290 to prevent "Potential loss of 
> data for replication of bulk loaded hfiles".
> I tried to enable bulk loading replication feature 
> (hbase.replication.bulkload.enabled=true and configure 
> hbase.replication.cluster.id) on a HBase 2.0.0 cluster, but the RegionServer 
> started with the following error:
> {quote}
> 2018-08-02 18:20:36,365 INFO 
> org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: System 
> coprocessor loading is enabled
> 2018-08-02 18:20:36,365 INFO 
> org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: Table 
> coprocessor loading is enabled
> 2018-08-02 18:20:36,365 ERROR 
> org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationObserver is not 
> of type
> RegionServerCoprocessor. Check the configuration of 
> hbase.coprocessor.regionserver.classes
> 2018-08-02 18:20:36,366 ERROR 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost: Cannot load coprocessor 
> ReplicationObserver
> {quote}
> It looks like this was broken by HBASE-17732 to me, but I could be wrong. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21001) ReplicationObserver fails to load in HBase 2.0.0

2018-09-08 Thread Guangxu Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-21001:
--
   Resolution: Fixed
Fix Version/s: 2.0.3
   2.1.1
   2.2.0
   3.0.0
   Status: Resolved  (was: Patch Available)

> ReplicationObserver fails to load in HBase 2.0.0
> 
>
> Key: HBASE-21001
> URL: https://issues.apache.org/jira/browse/HBASE-21001
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4, 2.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Guangxu Cheng
>Priority: Major
>  Labels: replication
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21001.branch-2.0.001.patch, 
> HBASE-21001.master.001.patch, HBASE-21001.master.001.patch, 
> HBASE-21001.master.002.patch, HBASE-21001.master.003.patch, 
> HBASE-21001.master.004.patch
>
>
> ReplicationObserver was added in HBASE-17290 to prevent "Potential loss of 
> data for replication of bulk loaded hfiles".
> I tried to enable bulk loading replication feature 
> (hbase.replication.bulkload.enabled=true and configure 
> hbase.replication.cluster.id) on a HBase 2.0.0 cluster, but the RegionServer 
> started with the following error:
> {quote}
> 2018-08-02 18:20:36,365 INFO 
> org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: System 
> coprocessor loading is enabled
> 2018-08-02 18:20:36,365 INFO 
> org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: Table 
> coprocessor loading is enabled
> 2018-08-02 18:20:36,365 ERROR 
> org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationObserver is not 
> of type
> RegionServerCoprocessor. Check the configuration of 
> hbase.coprocessor.regionserver.classes
> 2018-08-02 18:20:36,366 ERROR 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost: Cannot load coprocessor 
> ReplicationObserver
> {quote}
> It looks like this was broken by HBASE-17732 to me, but I could be wrong. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-16458) Shorten backup / restore test execution time

2018-09-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608078#comment-16608078
 ] 

Hadoop QA commented on HBASE-16458:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 8s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
22s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 44s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}122m 36s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
10s{color} | {color:green} hbase-backup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}182m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.replication.TestReplicationDroppedTables |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-16458 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938948/16458.v4.txt |
| Optional Tests |  asflicense  javac  javadoc  u

[jira] [Updated] (HBASE-16458) Shorten backup / restore test execution time

2018-09-08 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16458:
---
Attachment: 16458.v5.txt

> Shorten backup / restore test execution time
> ------------------------------------------------
>
> Key: HBASE-16458
> URL: https://issues.apache.org/jira/browse/HBASE-16458
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Vladimir Rodionov
>Priority: Major
>  Labels: backup
> Attachments: 16458-v1.patch, 16458.HBASE-7912.v3.txt, 
> 16458.HBASE-7912.v4.txt, 16458.HBASE-7912.v5.txt, 16458.v1.txt, 16458.v2.txt, 
> 16458.v2.txt, 16458.v3.txt, 16458.v4.txt, 16458.v5.txt, HBASE-16458-v1.patch, 
> HBASE-16458-v2.patch
>
>
> Below is the timing information for all the backup / restore tests (today's 
> result):
> {code}
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackup
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 576.273 sec - 
> in org.apache.hadoop.hbase.backup.TestIncrementalBackup
> Running org.apache.hadoop.hbase.backup.TestBackupBoundaryTests
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.67 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupBoundaryTests
> Running org.apache.hadoop.hbase.backup.TestBackupStatusProgress
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.34 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupStatusProgress
> Running org.apache.hadoop.hbase.backup.TestBackupAdmin
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 490.251 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupAdmin
> Running org.apache.hadoop.hbase.backup.TestHFileArchiving
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.323 sec - 
> in org.apache.hadoop.hbase.backup.TestHFileArchiving
> Running org.apache.hadoop.hbase.backup.TestSystemTableSnapshot
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.492 sec - 
> in org.apache.hadoop.hbase.backup.TestSystemTableSnapshot
> Running org.apache.hadoop.hbase.backup.TestBackupDescribe
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.758 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupDescribe
> Running org.apache.hadoop.hbase.backup.TestBackupLogCleaner
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.187 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupLogCleaner
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 330.539 sec - 
> in org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss
> Running org.apache.hadoop.hbase.backup.TestRemoteBackup
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.371 sec - 
> in org.apache.hadoop.hbase.backup.TestRemoteBackup
> Running org.apache.hadoop.hbase.backup.TestBackupSystemTable
> Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.893 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupSystemTable
> Running org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.779 sec - 
> in org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests
> Running org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.815 sec - 
> in org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet
> Running org.apache.hadoop.hbase.backup.TestBackupShowHistory
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 136.517 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupShowHistory
> Running org.apache.hadoop.hbase.backup.TestRemoteRestore
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.799 sec - 
> in org.apache.hadoop.hbase.backup.TestRemoteRestore
> Running org.apache.hadoop.hbase.backup.TestFullRestore
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 317.711 sec 
> - in org.apache.hadoop.hbase.backup.TestFullRestore
> Running org.apache.hadoop.hbase.backup.TestFullBackupSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.045 sec - 
> in org.apache.hadoop.hbase.backup.TestFullBackupSet
> Running org.apache.hadoop.hbase.backup.TestBackupDelete
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.214 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupDelete
> Running org.apache.hadoop.hbase.backup.TestBackupDeleteRestore
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.631 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupDeleteRestore
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackupDeleteTable
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 190.358 sec - 
> in org.apache.hadoop.hbase.backup.TestIncrementalBackupDeleteTable
> Runni

[jira] [Updated] (HBASE-21173) Remove the duplicate HRegion#close in TestHRegion

2018-09-08 Thread Guangxu Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-21173:
--
Attachment: HBASE-21173.master.001.patch
Status: Patch Available  (was: Open)

[~yuzhih...@gmail.com] [~liuml07], mind taking a look at it? Thanks

> Remove the duplicate HRegion#close in TestHRegion
> ------------------------------------------------
>
> Key: HBASE-21173
> URL: https://issues.apache.org/jira/browse/HBASE-21173
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21173.master.001.patch
>
>
>  After HBASE-21138, some test methods still have the duplicate 
> HRegion#close. So I am opening this issue to remove the duplicate close



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21052) After restoring a snapshot, table.jsp page for the table gets stuck

2018-09-08 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-21052:
-
Attachment: HBASE-21052.master.003.patch

> After restoring a snapshot, table.jsp page for the table gets stuck
> ------------------------------------------------
>
> Key: HBASE-21052
> URL: https://issues.apache.org/jira/browse/HBASE-21052
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: HBASE-21052.master.001.patch, 
> HBASE-21052.master.002.patch, HBASE-21052.master.003.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create "test", "cf"
> {code}
> 2. Take an hbase snapshot of the table
> {code}
> snapshot "test", "snap"
> {code}
> 3. Disable the table
> {code}
> disable "test"
> {code}
> 4. Restore the hbase snapshot
> {code}
> restore_snapshot "snap"
> {code}
> 5. Open the table.jsp page for the table in a browser, but it gets stuck
> {code}
> http://:16010/table.jsp?name=test
> {code}
> According to the following thread dump, it looks like 
> ConnectionImplementation.locateRegionInMeta() gets stuck when getting a 
> compaction state.
> {code}
> "qtp2068100669-89" #89 daemon prio=5 os_prio=31 tid=0x7febac55b800 
> nid=0xf403 waiting on condition [0x762b7000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:933)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:752)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:738)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegions(ConnectionImplementation.java:694)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegions(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:3336)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:2521)
> at 
> org.apache.hadoop.hbase.generated.master.table_jsp._jspService(table_jsp.java:316)
> at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.ja

[jira] [Commented] (HBASE-21173) Remove the duplicate HRegion#close in TestHRegion

2018-09-08 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608119#comment-16608119
 ] 

Ted Yu commented on HBASE-21173:


{code}
 region.close();
 assertEquals(max, region.getMaxFlushedSeqId());
+region = null;
{code}
I think the intention of HBASE-21138 is to let 
HBaseTestingUtility.closeRegionAndWAL do the cleanup.

Can you remove the duplicate region.close() call in these subtests?

Thanks
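
For illustration, a minimal sketch of the cleanup pattern HBASE-21138 aims for, assuming the {{region}} field and tearDown structure of TestHRegion (a sketch, not the patch itself):

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.regionserver.HRegion;
import org.junit.After;

public class TestHRegionCleanupSketch {
  private HRegion region; // set up by each test, as in TestHRegion

  @After
  public void tearDown() throws IOException {
    // closeRegionAndWAL closes both the region and its WAL, and is
    // null-safe; with this in place, the in-test region.close() calls
    // are the duplicates that can be removed.
    HBaseTestingUtility.closeRegionAndWAL(this.region);
    this.region = null;
  }
}
{code}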

> Remove the duplicate HRegion#close in TestHRegion
> ------------------------------------------------
>
> Key: HBASE-21173
> URL: https://issues.apache.org/jira/browse/HBASE-21173
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21173.master.001.patch
>
>
>  After HBASE-21138, some test methods still have the duplicate 
> HRegion#close. So I am opening this issue to remove the duplicate close



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21172) Reimplement the retry backoff logic for ReopenTableRegionsProcedure

2018-09-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608127#comment-16608127
 ] 

Hadoop QA commented on HBASE-21172:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
41s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} hbase-procedure: The patch generated 1 new + 1 
unchanged - 2 fixed = 2 total (was 3) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
21s{color} | {color:red} hbase-server: The patch generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
50s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
13m  7s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
4s{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}123m 
11s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21172 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938954/HBASE-21172.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux a69617ab9331 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/worksp

[jira] [Commented] (HBASE-16458) Shorten backup / restore test execution time

2018-09-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608136#comment-16608136
 ] 

Hadoop QA commented on HBASE-16458:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.7.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
12s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
45s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
12m  1s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m  
3s{color} | {color:green} hbase-backup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-16458 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938959/16458.v5.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  shadedjars  hadoopcheck  
xml  compile  findbugs  hbaseanti  checkstyle  |
| uname | Linux 7f12ffd8f189 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / b04b4b0fd1 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-R

[jira] [Commented] (HBASE-21173) Remove the duplicate HRegion#close in TestHRegion

2018-09-08 Thread Mingliang Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608149#comment-16608149
 ] 

Mingliang Liu commented on HBASE-21173:
---

 Thanks for working on this JIRA, [~andrewcheng]. In the previous discussion, I 
thought calling {{HTU.closeRegionAndWAL()}} on a closed/null region was 
harmless, while deleting the duplicate close might make some tests unhappy, so 
we were not very strict about closing the region only once. This is a good 
discussion to revisit.
 - The last one in {{testBulkLoadReplicationEnabled()}} was added after 
HBASE-21138 and can be removed here.
 - The {{HTU.closeRegionAndWAL()}} in {{testRegionInfoFileCreation()}} is 
followed by an assertion verifying that the .regioninfo file is still there. 
The close-and-assert pattern happens multiple times in the same test method, so 
I was not sure we could remove the close here.
 - In {{testSequenceId}} and {{testCloseCarryingSnapshot}}, the pattern in this 
patch, i.e. "{{region.close() && region = null}}", is not correct. The reason 
is that it makes the {{HTU.closeRegionAndWAL()}} in {{teardown()}} a no-op, 
leaving the WAL unclosed. One fix is to not set the field to null and leave the 
test as-is; a better one, I think, is what [~yuzhih...@gmail.com] suggested: 
replace the {{region.close()}} with {{HTU.closeRegionAndWAL()}} and then set 
{{this.region}} to null (see the sketch after this list).
 - In other places, setting {{this.region}} to null after 
{{HTU.closeRegionAndWAL()}} is good, since it explicitly makes the 
{{HTU.closeRegionAndWAL()}} in {{tearDown}} a no-op.
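
A short sketch of the fix suggested in the third bullet, inside the test method itself ({{HTU}} being the test's {{HBaseTestingUtility}}; {{closeRegionAndWAL}} is a static helper):

{code}
// Instead of region.close() followed by region = null, close the
// region AND its WAL, then null the field so the cleanup in
// teardown() becomes an explicit, safe no-op.
HBaseTestingUtility.closeRegionAndWAL(this.region);
this.region = null;
{code}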

> Remove the duplicate HRegion#close in TestHRegion
> ------------------------------------------------
>
> Key: HBASE-21173
> URL: https://issues.apache.org/jira/browse/HBASE-21173
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Attachments: HBASE-21173.master.001.patch
>
>
>  After HBASE-21138, some test methods still have the duplicate 
> HRegion#close. So I am opening this issue to remove the duplicate close



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-16458) Shorten backup / restore test execution time

2018-09-08 Thread Vladimir Rodionov (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608164#comment-16608164
 ] 

Vladimir Rodionov commented on HBASE-16458:
---

[~te...@apache.org], you did not need to install a shutdown hook; tearDown 
works the same way. I removed tearDown because there is no need to clean up 
after the test, since each test is executed in a separate JVM instance. This 
actually saved almost 30% of the overall execution time. 
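
For context, a minimal sketch of the tearDown style being referred to, assuming a shared {{TEST_UTIL}} ({{HBaseTestingUtility}}) as in the backup tests:

{code}
// With each test class forked into its own JVM, this @AfterClass
// method runs just before the JVM exits -- essentially the same
// point at which a JVM shutdown hook would fire.
@AfterClass
public static void tearDownAfterClass() throws Exception {
  TEST_UTIL.shutdownMiniCluster();
}
{code}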

> Shorten backup / restore test execution time
> ------------------------------------------------
>
> Key: HBASE-16458
> URL: https://issues.apache.org/jira/browse/HBASE-16458
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Vladimir Rodionov
>Priority: Major
>  Labels: backup
> Attachments: 16458-v1.patch, 16458.HBASE-7912.v3.txt, 
> 16458.HBASE-7912.v4.txt, 16458.HBASE-7912.v5.txt, 16458.v1.txt, 16458.v2.txt, 
> 16458.v2.txt, 16458.v3.txt, 16458.v4.txt, 16458.v5.txt, HBASE-16458-v1.patch, 
> HBASE-16458-v2.patch
>
>
> Below is the timing information for all the backup / restore tests (today's 
> result):
> {code}
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackup
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 576.273 sec - 
> in org.apache.hadoop.hbase.backup.TestIncrementalBackup
> Running org.apache.hadoop.hbase.backup.TestBackupBoundaryTests
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.67 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupBoundaryTests
> Running org.apache.hadoop.hbase.backup.TestBackupStatusProgress
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.34 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupStatusProgress
> Running org.apache.hadoop.hbase.backup.TestBackupAdmin
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 490.251 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupAdmin
> Running org.apache.hadoop.hbase.backup.TestHFileArchiving
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.323 sec - 
> in org.apache.hadoop.hbase.backup.TestHFileArchiving
> Running org.apache.hadoop.hbase.backup.TestSystemTableSnapshot
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.492 sec - 
> in org.apache.hadoop.hbase.backup.TestSystemTableSnapshot
> Running org.apache.hadoop.hbase.backup.TestBackupDescribe
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.758 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupDescribe
> Running org.apache.hadoop.hbase.backup.TestBackupLogCleaner
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.187 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupLogCleaner
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 330.539 sec - 
> in org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss
> Running org.apache.hadoop.hbase.backup.TestRemoteBackup
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.371 sec - 
> in org.apache.hadoop.hbase.backup.TestRemoteBackup
> Running org.apache.hadoop.hbase.backup.TestBackupSystemTable
> Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.893 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupSystemTable
> Running org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.779 sec - 
> in org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests
> Running org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.815 sec - 
> in org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet
> Running org.apache.hadoop.hbase.backup.TestBackupShowHistory
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 136.517 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupShowHistory
> Running org.apache.hadoop.hbase.backup.TestRemoteRestore
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.799 sec - 
> in org.apache.hadoop.hbase.backup.TestRemoteRestore
> Running org.apache.hadoop.hbase.backup.TestFullRestore
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 317.711 sec 
> - in org.apache.hadoop.hbase.backup.TestFullRestore
> Running org.apache.hadoop.hbase.backup.TestFullBackupSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.045 sec - 
> in org.apache.hadoop.hbase.backup.TestFullBackupSet
> Running org.apache.hadoop.hbase.backup.TestBackupDelete
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.214 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupDelete
> Running org.apache.hadoop.hbase.backup.TestBackupDeleteRestore
> Tests run: 1, Failures: 0, Errors: 0

[jira] [Updated] (HBASE-21171) [amv2] Tool to parse a directory of MasterProcWALs standalone

2018-09-08 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21171:
--
Attachment: HBASE-21171.branch-2.1.002.patch

> [amv2] Tool to parse a directory of MasterProcWALs standalone
> ------------------------------------------------
>
> Key: HBASE-21171
> URL: https://issues.apache.org/jira/browse/HBASE-21171
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: HBASE-21171.branch-2.1.001.patch, 
> HBASE-21171.branch-2.1.002.patch
>
>
> I want to be able to test parsing and to profile a standalone parse and 
> WALProcedureStore load of procedures. Adding a simple main on 
> WALProcedureStore seems to be enough. I tested it by parsing a dir of 
> hundreds of WALs to see what is going on when we try to load. Good for 
> figuring out how to log, where the memory is going, etc., in this subsystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21173) Remove the duplicate HRegion#close in TestHRegion

2018-09-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608181#comment-16608181
 ] 

Hadoop QA commented on HBASE-21173:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
15s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
18s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 51s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}134m  
3s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}176m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21173 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938960/HBASE-21173.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 4a8a24e84861 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / b04b4b0fd1 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14365/testReport/ |
| Max. process+thread count | 4797 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14365/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Remove the dup

[jira] [Commented] (HBASE-16458) Shorten backup / restore test execution time

2018-09-08 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608182#comment-16608182
 ] 

Ted Yu commented on HBASE-16458:


From the test console, you would see:
{code}
17:01:53 |  +1  |   unit  |  13m  3s   | hbase-backup in the patch 
passed. 
{code}
There is no slowdown from tearing down the cluster through a shutdown hook.

We should perform the mini cluster shutdown to remove the intermediate files 
generated during test runs.
Otherwise there is a chance that such files stay on the Jenkins machine(s).
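
A minimal sketch of the shutdown-hook variant described above, again assuming a shared {{TEST_UTIL}}; {{cleanupTestDir()}} is the HBaseTestingUtility helper that removes the local test data directory:

{code}
// Registered once (e.g. in setup); runs when the forked test JVM
// exits, shutting down the mini cluster and deleting its temporary
// files so nothing accumulates on the Jenkins machine(s).
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
  try {
    TEST_UTIL.shutdownMiniCluster();
    TEST_UTIL.cleanupTestDir();
  } catch (Exception e) {
    e.printStackTrace();
  }
}));
{code}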

> Shorten backup / restore test execution time
> ------------------------------------------------
>
> Key: HBASE-16458
> URL: https://issues.apache.org/jira/browse/HBASE-16458
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Vladimir Rodionov
>Priority: Major
>  Labels: backup
> Attachments: 16458-v1.patch, 16458.HBASE-7912.v3.txt, 
> 16458.HBASE-7912.v4.txt, 16458.HBASE-7912.v5.txt, 16458.v1.txt, 16458.v2.txt, 
> 16458.v2.txt, 16458.v3.txt, 16458.v4.txt, 16458.v5.txt, HBASE-16458-v1.patch, 
> HBASE-16458-v2.patch
>
>
> Below is the timing information for all the backup / restore tests (today's 
> result):
> {code}
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackup
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 576.273 sec - 
> in org.apache.hadoop.hbase.backup.TestIncrementalBackup
> Running org.apache.hadoop.hbase.backup.TestBackupBoundaryTests
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.67 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupBoundaryTests
> Running org.apache.hadoop.hbase.backup.TestBackupStatusProgress
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 102.34 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupStatusProgress
> Running org.apache.hadoop.hbase.backup.TestBackupAdmin
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 490.251 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupAdmin
> Running org.apache.hadoop.hbase.backup.TestHFileArchiving
> Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.323 sec - 
> in org.apache.hadoop.hbase.backup.TestHFileArchiving
> Running org.apache.hadoop.hbase.backup.TestSystemTableSnapshot
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.492 sec - 
> in org.apache.hadoop.hbase.backup.TestSystemTableSnapshot
> Running org.apache.hadoop.hbase.backup.TestBackupDescribe
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.758 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupDescribe
> Running org.apache.hadoop.hbase.backup.TestBackupLogCleaner
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.187 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupLogCleaner
> Running org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 330.539 sec - 
> in org.apache.hadoop.hbase.backup.TestIncrementalBackupNoDataLoss
> Running org.apache.hadoop.hbase.backup.TestRemoteBackup
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.371 sec - 
> in org.apache.hadoop.hbase.backup.TestRemoteBackup
> Running org.apache.hadoop.hbase.backup.TestBackupSystemTable
> Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.893 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupSystemTable
> Running org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.779 sec - 
> in org.apache.hadoop.hbase.backup.TestRestoreBoundaryTests
> Running org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.815 sec - 
> in org.apache.hadoop.hbase.backup.TestFullBackupSetRestoreSet
> Running org.apache.hadoop.hbase.backup.TestBackupShowHistory
> Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 136.517 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupShowHistory
> Running org.apache.hadoop.hbase.backup.TestRemoteRestore
> Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.799 sec - 
> in org.apache.hadoop.hbase.backup.TestRemoteRestore
> Running org.apache.hadoop.hbase.backup.TestFullRestore
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 317.711 sec 
> - in org.apache.hadoop.hbase.backup.TestFullRestore
> Running org.apache.hadoop.hbase.backup.TestFullBackupSet
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 87.045 sec - 
> in org.apache.hadoop.hbase.backup.TestFullBackupSet
> Running org.apache.hadoop.hbase.backup.TestBackupDelete
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.214 sec - 
> in org.apache.hadoop.hbase.backup.TestBackupDelete
> Running org.apache.hadoop.hba

[jira] [Commented] (HBASE-21035) Meta Table should be able to online even if all procedures are lost

2018-09-08 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608185#comment-16608185
 ] 

stack commented on HBASE-21035:
---

I'm back. 

My master is aborting after spending four hours reconstructing the assignment 
state from reading 300+ WALs. The master then becomes active. In the background 
there are procedures running and finishing... mostly SCPs.

Then my master is dying with:

2018-09-07 22:21:58,968 ERROR org.apache.hadoop.hbase.master.HMaster: * 
ABORTING master vc0207.halxg.cloudera.com,22001,1536380265734: Unhandled 
exception. Starting shutdown. *
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
attempts=31, exceptions:
Fri Sep 07 22:21:58 PDT 2018, null, java.net.SocketTimeoutException: 
callTimeout=6, callDuration=69864: 
org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online 
on vd0412.halxg.cloudera.com,22101,1536380043533

i.e. meta is not online even though the above server's SCP has completed.

This is a dirty install, so what was up in zk for the meta location could be 
long stale, but here we have a state where no SCPs are running and meta is not 
online.

I need to write the tool to insert a meta assign. It just takes 4 or 5 hours 
before I know if it is the fix for this problem. And then there is the scan of 
the hbase:namespace table next.

Thinking of waiting on all SCPs to finish before we do our first meta scan, and 
if meta is still not online, then auto-scheduling the restore meta procedure 
... splitting meta logs inline and then assigning meta. Would this violate your 
principle, [~Apache9]?

In other words, I need to write the restore meta procedure -- it would split 
meta logs and then do the meta assign -- but I think we should auto-schedule it 
in the case above (a rough sketch follows).
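
A rough sketch of the proposed flow; the helper names here ({{waitOnSCPs}}, {{isMetaOnline}}, {{scheduleRestoreMetaProcedure}}) are hypothetical placeholders for illustration, not existing HBase APIs:

{code}
// Hypothetical pseudocode of the auto-schedule idea, not real master code.
void maybeRestoreMeta() throws Exception {
  waitOnSCPs();                       // let all ServerCrashProcedures finish
  if (!isMetaOnline()) {              // the first meta scan would hang otherwise
    // Auto-schedule the (to-be-written) restore meta procedure:
    // split meta WALs inline, then assign hbase:meta.
    scheduleRestoreMetaProcedure();
  }
}
{code}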

> Meta Table should be able to online even if all procedures are lost
> ------------------------------------------------
>
> Key: HBASE-21035
> URL: https://issues.apache.org/jira/browse/HBASE-21035
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.1.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-21035.branch-2.0.001.patch
>
>
> After HBASE-20708, we changed the way we init after the master starts. It 
> will only check WAL dirs and compare them to Zookeeper RS nodes to decide 
> which servers need to expire. For servers whose dir ends with 'SPLITTING', we 
> ensure that there will be an SCP for them.
> But if the server with the meta region crashed before the master restarts, 
> and if all the procedure WALs are lost (due to a bug, or deleted manually, 
> whatever), the newly restarted master will be stuck when initing, since no 
> one will bring the meta region online.
> Although it is an anomalous case, I think that no matter what happens, we 
> need to online the meta region. Otherwise we are sitting ducks; nothing can 
> be done.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21052) After restoring a snapshot, table.jsp page for the table gets stuck

2018-09-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608190#comment-16608190
 ] 

Hadoop QA commented on HBASE-21052:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
21s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} The patch hbase-client passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} hbase-server: The patch generated 0 new + 0 
unchanged - 2 fixed = 0 total (was 2) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
29s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 45s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
58s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}138m 58s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}191m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.procedure.TestDisableTableProcedure |
|   | hadoop.hbase.ipc.TestMasterFifoRpcScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21052 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938961/HBASE-21052.master.003.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux b7b4cef1a7a0 3.13.0-139-generic #188-U

[jira] [Commented] (HBASE-21052) After restoring a snapshot, table.jsp page for the table gets stuck

2018-09-08 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608191#comment-16608191
 ] 

Ted Yu commented on HBASE-21052:


lgtm

Test failure was not related.

> After restoring a snapshot, table.jsp page for the table gets stuck
> ------------------------------------------------
>
> Key: HBASE-21052
> URL: https://issues.apache.org/jira/browse/HBASE-21052
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: HBASE-21052.master.001.patch, 
> HBASE-21052.master.002.patch, HBASE-21052.master.003.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create "test", "cf"
> {code}
> 2. Take an hbase snapshot of the table
> {code}
> snapshot "test", "snap"
> {code}
> 3. Disable the table
> {code}
> disable "test"
> {code}
> 4. Restore the hbase snapshot
> {code}
> restore_snapshot "snap"
> {code}
> 5. Open the table.jsp page for the table in a browser, but it gets stuck
> {code}
> http://:16010/table.jsp?name=test
> {code}
> According to the following thread dump, it looks like 
> ConnectionImplementation.locateRegionInMeta() gets stuck when getting a 
> compaction state.
> {code}
> "qtp2068100669-89" #89 daemon prio=5 os_prio=31 tid=0x7febac55b800 
> nid=0xf403 waiting on condition [0x762b7000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:933)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:752)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:738)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegions(ConnectionImplementation.java:694)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegions(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:3336)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:2521)
> at 
> org.apache.hadoop.hbase.generated.master.table_jsp._jspService(table_jsp.java:316)
> at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHand

[jira] [Commented] (HBASE-21171) [amv2] Tool to parse a directory of MasterProcWALs standalone

2018-09-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608192#comment-16608192
 ] 

Hadoop QA commented on HBASE-21171:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2.1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
40s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
59s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} branch-2.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
59s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m 53s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
5s{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:42ca976 |
| JIRA Issue | HBASE-21171 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938965/HBASE-21171.branch-2.1.002.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 1f3a3e049200 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2.1 / f85fba4a54 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14367/testReport/ |
| Max. process+thread count | 275 (vs. ulimit of 1) |
| modules | C: hbase-procedure U: hbase-procedure |
| Console output | 
https://buil

[jira] [Commented] (HBASE-21001) ReplicationObserver fails to load in HBase 2.0.0

2018-09-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608231#comment-16608231
 ] 

Hudson commented on HBASE-21001:


Results for branch branch-2.0
[build #787 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/787/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/787//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/787//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/787//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> ReplicationObserver fails to load in HBase 2.0.0
> 
>
> Key: HBASE-21001
> URL: https://issues.apache.org/jira/browse/HBASE-21001
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-4, 2.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Guangxu Cheng
>Priority: Major
>  Labels: replication
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21001.branch-2.0.001.patch, 
> HBASE-21001.master.001.patch, HBASE-21001.master.001.patch, 
> HBASE-21001.master.002.patch, HBASE-21001.master.003.patch, 
> HBASE-21001.master.004.patch
>
>
> ReplicationObserver was added in HBASE-17290 to prevent "Potential loss of 
> data for replication of bulk loaded hfiles".
> I tried to enable bulk loading replication feature 
> (hbase.replication.bulkload.enabled=true and configure 
> hbase.replication.cluster.id) on a HBase 2.0.0 cluster, but the RegionServer 
> started with the following error:
> {quote}
> 2018-08-02 18:20:36,365 INFO 
> org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: System 
> coprocessor loading is enabled
> 2018-08-02 18:20:36,365 INFO 
> org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: Table 
> coprocessor loading is enabled
> 2018-08-02 18:20:36,365 ERROR 
> org.apache.hadoop.hbase.regionserver.RegionServerCoprocessorHost: 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationObserver is not 
> of type
> RegionServerCoprocessor. Check the configuration of 
> hbase.coprocessor.regionserver.classes
> 2018-08-02 18:20:36,366 ERROR 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost: Cannot load coprocessor 
> ReplicationObserver
> {quote}
> It looks to me like this was broken by HBASE-17732, but I could be wrong. 
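For reference, a minimal sketch of the setup the reporter describes, done programmatically; the two property names come from the description above, while the cluster id value is only an illustrative placeholder:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BulkLoadReplicationConf {
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    // Enable replication of bulk-loaded HFiles (the HBASE-17290 feature).
    conf.setBoolean("hbase.replication.bulkload.enabled", true);
    // A unique id for this cluster; "source-cluster-1" is a placeholder.
    conf.set("hbase.replication.cluster.id", "source-cluster-1");
    return conf;
  }
}
{code}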



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21171) [amv2] Tool to parse a directory of MasterProcWALs standalone

2018-09-08 Thread Mike Drob (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608253#comment-16608253
 ] 

Mike Drob commented on HBASE-21171:
---

bq. +  ProcedureExecutor pe = new ProcedureExecutor(conf, new 
BareBonesEnv(), store);
If you just need some object that doesn't actually get used, you could do 
{{new Object()}}, no?
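For illustration, a hedged sketch of that suggestion, assuming the three-argument ProcedureExecutor constructor shown in the quoted diff:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.procedure2.ProcedureExecutor;
import org.apache.hadoop.hbase.procedure2.store.ProcedureStore;

public class ParseOnlyExecutor {
  /**
   * Builds an executor whose environment is never dereferenced during a
   * parse-only load, so a plain Object stands in for a no-op env class.
   */
  public static ProcedureExecutor<Object> create(Configuration conf,
      ProcedureStore store) {
    return new ProcedureExecutor<>(conf, new Object(), store);
  }
}
{code}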

> [amv2] Tool to parse a directory of MasterProcWALs standalone
> -
>
> Key: HBASE-21171
> URL: https://issues.apache.org/jira/browse/HBASE-21171
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: HBASE-21171.branch-2.1.001.patch, 
> HBASE-21171.branch-2.1.002.patch
>
>
> I want to be able to test parsing and be able to profile a standalone parse 
> and WALProcedureStore load of procedures. Adding a simple main on 
> WALProcedureStore seems to be enough. I tested it by parsing a dir of hundreds 
> of WALs to see what is going on when we try to load. Good for figuring out how 
> to log, where the memory is going, etc., in this subsystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21035) Meta Table should be able to online even if all procedures are lost

2018-09-08 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608254#comment-16608254
 ] 

Duo Zhang commented on HBASE-21035:
---

If the cluster is not in a good state and several RSes keep crashing, the 
master will hang there forever if you need to wait for all SCPs to finish... And 
how do you determine programmatically that the meta is on a stale server?

And my concern is that there will be races, since a server crash can happen at 
any time; including this logic in the start-up code path will make the code 
flaky...

We can have a tool to do something like this, but when to use it should be 
decided by a human. Maybe we could do more checks and print something in the log 
saying that the state seems incorrect: please check XXX and XXX to see if there 
is something wrong and try the XXX tool?

> Meta Table should be able to online even if all procedures are lost
> ---
>
> Key: HBASE-21035
> URL: https://issues.apache.org/jira/browse/HBASE-21035
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.1.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-21035.branch-2.0.001.patch
>
>
> After HBASE-20708, we changed the way we init after the master starts. It will 
> only check WAL dirs and compare them to the ZooKeeper RS nodes to decide which 
> servers need to expire. For servers whose dir ends with 'SPLITTING', we assume 
> that there will be an SCP for them.
> But if the server with the meta region crashed before the master restarts, and 
> if all the procedure WALs are lost (due to a bug, or deleted manually, 
> whatever), the newly restarted master will be stuck when initializing, since no 
> one will bring the meta region online.
> Although it is an anomalous case, I think no matter what happens we need to 
> bring the meta region online. Otherwise, we are sitting ducks; nothing can be 
> done.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21052) After restoring a snapshot, table.jsp page for the table gets stuck

2018-09-08 Thread Toshihiro Suzuki (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608267#comment-16608267
 ] 

Toshihiro Suzuki commented on HBASE-21052:
--

Thank you for reviewing [~yuzhih...@gmail.com]. Let me commit.

> After restoring a snapshot, table.jsp page for the table gets stuck
> ---
>
> Key: HBASE-21052
> URL: https://issues.apache.org/jira/browse/HBASE-21052
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: HBASE-21052.master.001.patch, 
> HBASE-21052.master.002.patch, HBASE-21052.master.003.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create "test", "cf"
> {code}
> 2. Take an HBase snapshot of the table
> {code}
> snapshot "test", "snap"
> {code}
> 3. Disable the table
> {code}
> disable "test"
> {code}
> 4. Restore the HBase snapshot
> {code}
> restore_snapshot "snap"
> {code}
> 5. Open the table.jsp page for the table in a browser, but it gets stuck
> {code}
> http://:16010/table.jsp?name=test
> {code}
> According to the following thread dump, it looks like 
> ConnectionImplementation.locateRegionInMeta() gets stuck when getting a 
> compaction state.
> {code}
> "qtp2068100669-89" #89 daemon prio=5 os_prio=31 tid=0x7febac55b800 
> nid=0xf403 waiting on condition [0x762b7000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:933)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:752)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:738)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegions(ConnectionImplementation.java:694)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegions(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:3336)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:2521)
> at 
> org.apache.hadoop.hbase.generated.master.table_jsp._jspService(table_jsp.java:316)
> at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 

[jira] [Updated] (HBASE-21052) After restoring a snapshot, table.jsp page for the table gets stuck

2018-09-08 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-21052:
-
Fix Version/s: 2.2.0
   3.0.0

> After restoring a snapshot, table.jsp page for the table gets stuck
> ---
>
> Key: HBASE-21052
> URL: https://issues.apache.org/jira/browse/HBASE-21052
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21052.master.001.patch, 
> HBASE-21052.master.002.patch, HBASE-21052.master.003.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create "test", "cf"
> {code}
> 2. Take an HBase snapshot of the table
> {code}
> snapshot "test", "snap"
> {code}
> 3. Disable the table
> {code}
> disable "test"
> {code}
> 4. Restore the HBase snapshot
> {code}
> restore_snapshot "snap"
> {code}
> 5. Open the table.jsp page for the table in a browser, but it gets stuck
> {code}
> http://:16010/table.jsp?name=test
> {code}
> According to the following thread dump, it looks like 
> ConnectionImplementation.locateRegionInMeta() gets stuck when getting a 
> compaction state.
> {code}
> "qtp2068100669-89" #89 daemon prio=5 os_prio=31 tid=0x7febac55b800 
> nid=0xf403 waiting on condition [0x762b7000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:933)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:752)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:738)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegions(ConnectionImplementation.java:694)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegions(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:3336)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:2521)
> at 
> org.apache.hadoop.hbase.generated.master.table_jsp._jspService(table_jsp.java:316)
> at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handl

[jira] [Commented] (HBASE-21052) After restoring a snapshot, table.jsp page for the table gets stuck

2018-09-08 Thread Toshihiro Suzuki (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608268#comment-16608268
 ] 

Toshihiro Suzuki commented on HBASE-21052:
--

Pushed to master and branch-2

> After restoring a snapshot, table.jsp page for the table gets stuck
> ---
>
> Key: HBASE-21052
> URL: https://issues.apache.org/jira/browse/HBASE-21052
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21052.master.001.patch, 
> HBASE-21052.master.002.patch, HBASE-21052.master.003.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create "test", "cf"
> {code}
> 2. Take an HBase snapshot of the table
> {code}
> snapshot "test", "snap"
> {code}
> 3. Disable the table
> {code}
> disable "test"
> {code}
> 4. Restore the HBase snapshot
> {code}
> restore_snapshot "snap"
> {code}
> 5. Open the table.jsp page for the table in a browser, but it gets stuck
> {code}
> http://:16010/table.jsp?name=test
> {code}
> According to the following thread dump, it looks like 
> ConnectionImplementation.locateRegionInMeta() gets stuck when getting a 
> compaction state.
> {code}
> "qtp2068100669-89" #89 daemon prio=5 os_prio=31 tid=0x7febac55b800 
> nid=0xf403 waiting on condition [0x762b7000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:933)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:752)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:738)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegions(ConnectionImplementation.java:694)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegions(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:3336)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:2521)
> at 
> org.apache.hadoop.hbase.generated.master.table_jsp._jspService(table_jsp.java:316)
> at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   

[jira] [Updated] (HBASE-21052) After restoring a snapshot, table.jsp page for the table gets stuck

2018-09-08 Thread Toshihiro Suzuki (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-21052:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> After restoring a snapshot, table.jsp page for the table gets stuck
> ---
>
> Key: HBASE-21052
> URL: https://issues.apache.org/jira/browse/HBASE-21052
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21052.master.001.patch, 
> HBASE-21052.master.002.patch, HBASE-21052.master.003.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create "test", "cf"
> {code}
> 2. Take an HBase snapshot of the table
> {code}
> snapshot "test", "snap"
> {code}
> 3. Disable the table
> {code}
> disable "test"
> {code}
> 4. Restore the HBase snapshot
> {code}
> restore_snapshot "snap"
> {code}
> 5. Open the table.jsp page for the table in a browser, but it gets stuck
> {code}
> http://:16010/table.jsp?name=test
> {code}
> According to the following thread dump, it looks like 
> ConnectionImplementation.locateRegionInMeta() gets stuck when getting a 
> compaction state.
> {code}
> "qtp2068100669-89" #89 daemon prio=5 os_prio=31 tid=0x7febac55b800 
> nid=0xf403 waiting on condition [0x762b7000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:933)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:752)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:738)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegions(ConnectionImplementation.java:694)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegions(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:3336)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:2521)
> at 
> org.apache.hadoop.hbase.generated.master.table_jsp._jspService(table_jsp.java:316)
> at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclip

[jira] [Commented] (HBASE-21171) [amv2] Tool to parse a directory of MasterProcWALs standalone

2018-09-08 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608284#comment-16608284
 ] 

stack commented on HBASE-21171:
---

Let me do it.

> [amv2] Tool to parse a directory of MasterProcWALs standalone
> -
>
> Key: HBASE-21171
> URL: https://issues.apache.org/jira/browse/HBASE-21171
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: HBASE-21171.branch-2.1.001.patch, 
> HBASE-21171.branch-2.1.002.patch
>
>
> I want to be able to test parsing and be able to profile a standalone parse 
> and WALProcedureStore load of procedures. Adding a simple main on 
> WALProcedureStore seems to be enough. I tested it by parsing a dir of hundreds 
> of WALs to see what is going on when we try to load. Good for figuring out how 
> to log, where the memory is going, etc., in this subsystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21035) Meta Table should be able to online even if all procedures are lost

2018-09-08 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608285#comment-16608285
 ] 

stack commented on HBASE-21035:
---

Let me do what you suggest first. 

> Meta Table should be able to online even if all procedures are lost
> ---
>
> Key: HBASE-21035
> URL: https://issues.apache.org/jira/browse/HBASE-21035
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.1.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-21035.branch-2.0.001.patch
>
>
> After HBASE-20708, we changed the way we init after the master starts. It will 
> only check WAL dirs and compare them to the ZooKeeper RS nodes to decide which 
> servers need to expire. For servers whose dir ends with 'SPLITTING', we assume 
> that there will be an SCP for them.
> But if the server with the meta region crashed before the master restarts, and 
> if all the procedure WALs are lost (due to a bug, or deleted manually, 
> whatever), the newly restarted master will be stuck when initializing, since no 
> one will bring the meta region online.
> Although it is an anomalous case, I think no matter what happens we need to 
> bring the meta region online. Otherwise, we are sitting ducks; nothing can be 
> done.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21171) [amv2] Tool to parse a directory of MasterProcWALs standalone

2018-09-08 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-21171:
--
Attachment: HBASE-21171.branch-2.1.003.patch

> [amv2] Tool to parse a directory of MasterProcWALs standalone
> -
>
> Key: HBASE-21171
> URL: https://issues.apache.org/jira/browse/HBASE-21171
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: HBASE-21171.branch-2.1.001.patch, 
> HBASE-21171.branch-2.1.002.patch, HBASE-21171.branch-2.1.003.patch
>
>
> I want to be able to test parsing and be able to profile a standalone parse 
> and WALProcedureStore load of procedures. Adding a simple main on 
> WALProcedureStore seems to be enough. I tested it by parsing a dir of hundreds 
> of WALs to see what is going on when we try to load. Good for figuring out how 
> to log, where the memory is going, etc., in this subsystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21171) [amv2] Tool to parse a directory of MasterProcWALs standalone

2018-09-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608289#comment-16608289
 ] 

Hadoop QA commented on HBASE-21171:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2.1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
58s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
43s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} branch-2.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
38s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 59s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
47s{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:42ca976 |
| JIRA Issue | HBASE-21171 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12938978/HBASE-21171.branch-2.1.003.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 932869f4a294 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2.1 / f85fba4a54 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/14368/testReport/ |
| Max. process+thread count | 267 (vs. ulimit of 1) |
| modules | C: hbase-procedure U: hbase-procedure |
| Console output | 
https

[jira] [Commented] (HBASE-21035) Meta Table should be able to online even if all procedures are lost

2018-09-08 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608297#comment-16608297
 ] 

Duo Zhang commented on HBASE-21035:
---

I think one thing we could do is to collect the WAL directories which end with 
the splitting suffix, and check whether there's an SCP associated with each, 
just like what [~allan163] has done here. But instead of scheduling an SCP 
directly, we just log it and tell the operators that something may have gone wrong.
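A hedged sketch of that check; the helper inputs and log wording are illustrative, not the real master start-up API:

{code}
import java.util.List;
import org.apache.hadoop.hbase.ServerName;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SplittingDirSanityCheck {
  private static final Logger LOG =
      LoggerFactory.getLogger(SplittingDirSanityCheck.class);

  /** Warn, instead of scheduling an SCP, when a SPLITTING dir has no SCP. */
  public static void warnOnOrphanSplittingDirs(List<ServerName> splittingDirs,
      List<ServerName> serversWithScp) {
    for (ServerName sn : splittingDirs) {
      if (!serversWithScp.contains(sn)) {
        LOG.warn("WAL dir for {} ends with the SPLITTING suffix but no SCP is"
            + " queued for it; procedure state may have been lost. Please"
            + " inspect the cluster before scheduling recovery.", sn);
      }
    }
  }
}
{code}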

> Meta Table should be able to online even if all procedures are lost
> ---
>
> Key: HBASE-21035
> URL: https://issues.apache.org/jira/browse/HBASE-21035
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.1.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Attachments: HBASE-21035.branch-2.0.001.patch
>
>
> After HBASE-20708, we changed the way we init after master starts. It will 
> only check WAL dirs and compare to Zookeeper RS nodes to decide which server 
> need to expire. For servers which's dir is ending with 'SPLITTING', we assure 
> that there will be a SCP for it.
> But, if the server with the meta region crashed before master restarts, and 
> if all the procedure wals are lost (due to bug, or deleted manually, 
> whatever), the new restarted master will be stuck when initing. Since no one 
> will bring meta region online.
> Although it is an anomaly case, but I think no matter what happens, we need 
> to online meta region. Otherwise, we are sitting ducks, noting can be done.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21172) Reimplement the retry backoff logic for ReopenTableRegionsProcedure

2018-09-08 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21172:
--
Attachment: HBASE-21172-v1.patch

> Reimplement the retry backoff logic for ReopenTableRegionsProcedure
> ---
>
> Key: HBASE-21172
> URL: https://issues.apache.org/jira/browse/HBASE-21172
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2, proc-v2
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21172-v1.patch, HBASE-21172.patch
>
>
> Now we just do a blocking sleep in the execute method, and there is no 
> exponential backoff.
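A minimal sketch of the intended direction, assuming a simple capped exponential schedule; the class and constants here are illustrative, not the committed patch:

{code}
public final class RetryBackoff {
  private RetryBackoff() {}

  /**
   * Backoff in millis for the given 0-based retry attempt, doubling per
   * attempt and capped at maxMillis.
   */
  public static long backoffMillis(int attempt, long baseMillis, long maxMillis) {
    int shift = Math.min(Math.max(attempt, 0), 30); // clamp to a safe shift range
    long backoff = baseMillis << shift;
    return Math.min(backoff, maxMillis);
  }
}
{code}

With a base of 1000 ms and a cap of 60000 ms, attempts 0..6 yield 1s, 2s, 4s, 8s, 16s, 32s, 60s; inside a procedure such a delay would presumably be used to suspend and reschedule rather than block the worker thread in execute.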



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21144) AssignmentManager.waitForAssignment is not stable

2018-09-08 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608318#comment-16608318
 ] 

Duo Zhang commented on HBASE-21144:
---

Seems to have worked. Let me commit to the other branches.

> AssignmentManager.waitForAssignment is not stable
> -
>
> Key: HBASE-21144
> URL: https://issues.apache.org/jira/browse/HBASE-21144
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, test
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>
> Attachments: HBASE-21144-addendum.patch, HBASE-21144-v1.patch, 
> HBASE-21144.patch
>
>
> https://builds.apache.org/job/HBase-Flaky-Tests/job/master/366/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestMetaWithReplicas-output.txt/*view*/
> All replicas for meta table are on the same machine
> {noformat}
> 2018-09-02 19:49:05,486 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1.1588230740 on 
> asf904.gq1.ygridcore.net,47561,1535917740998
> 2018-09-02 19:49:32,802 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0001.534574363 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> 2018-09-02 19:49:33,496 DEBUG [RS_OPEN_META-regionserver/asf904:0-0] 
> handler.OpenRegionHandler(127): Opened hbase:meta,,1_0002.1657623790 on 
> asf904.gq1.ygridcore.net,55408,1535917768453
> {noformat}
> But after calling am.waitForAssignment, the region location is still null...
> {noformat}
> 2018-09-02 19:49:32,414 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0001.534574363 on null
> 2018-09-02 19:49:32,844 INFO  [Time-limited test] 
> client.TestMetaWithReplicas(113): HBASE:META DEPLOY: 
> hbase:meta,,1_0002.1657623790 on null
> {noformat}
> So we will not balance the replicas, which causes TestMetaWithReplicas to hang 
> forever...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-16594) ROW_INDEX_V2 DBE

2018-09-08 Thread Lars Hofhansl (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-16594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608319#comment-16608319
 ] 

Lars Hofhansl commented on HBASE-16594:
---

Did this get abandoned?

> ROW_INDEX_V2 DBE
> 
>
> Key: HBASE-16594
> URL: https://issues.apache.org/jira/browse/HBASE-16594
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Reporter: binlijin
>Assignee: binlijin
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0
>
> Attachments: HBASE-16594-master_v1.patch, HBASE-16594-master_v2.patch
>
>
> See HBASE-16213, ROW_INDEX_V1 DataBlockEncoding.
> ROW_INDEX_V1 is the first version, which has no storage optimization; 
> ROW_INDEX_V2 adds storage optimization: it stores every row key only once and 
> stores the column family only once per HFileBlock.
> ROW_INDEX_V1 is:
> /** 
>  * Store cells following every row's start offset, so we can binary search to 
> a row's cells. 
>  * 
>  * Format: 
>  * flat cells 
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * integer: dataSize 
>  * 
> */
> ROW_INDEX_V2 is:
>  * row1 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  * row2 qualifier timestamp type value tag
>  * row3 qualifier timestamp type value tag
>  *  qualifier timestamp type value tag
>  *  
>  * integer: number of rows 
>  * integer: row0's offset 
>  * integer: row1's offset 
>  *  
>  * column family
>  * integer: dataSize 
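As a small illustration of what the trailing row-offset section enables, a generic binary search over the row index; the comparison is abstracted since real cells are compared with a cell comparator, and the names are illustrative:

{code}
import java.util.function.IntUnaryOperator;

public final class RowOffsetIndex {
  private RowOffsetIndex() {}

  /**
   * Binary-searches rows 0..numRows-1. compareRowAt.applyAsInt(i) returns a
   * negative/zero/positive value when row i sorts before/equal-to/after the
   * sought key. Returns the matching row index, or -(insertionPoint + 1) if
   * the key is absent.
   */
  public static int seekRow(int numRows, IntUnaryOperator compareRowAt) {
    int lo = 0, hi = numRows - 1;
    while (lo <= hi) {
      int mid = (lo + hi) >>> 1;
      int cmp = compareRowAt.applyAsInt(mid);
      if (cmp == 0) {
        return mid;
      } else if (cmp < 0) {
        lo = mid + 1; // row mid sorts before the key; search the upper half
      } else {
        hi = mid - 1; // row mid sorts after the key; search the lower half
      }
    }
    return -(lo + 1);
  }
}
{code}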



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21052) After restoring a snapshot, table.jsp page for the table gets stuck

2018-09-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16608357#comment-16608357
 ] 

Hudson commented on HBASE-21052:


Results for branch branch-2
[build #1222 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1222/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1222//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1222//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1222//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> After restoring a snapshot, table.jsp page for the table gets stuck
> ---
>
> Key: HBASE-21052
> URL: https://issues.apache.org/jira/browse/HBASE-21052
> Project: HBase
>  Issue Type: Bug
>  Components: snapshots
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21052.master.001.patch, 
> HBASE-21052.master.002.patch, HBASE-21052.master.003.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create "test", "cf"
> {code}
> 2. Take an HBase snapshot of the table
> {code}
> snapshot "test", "snap"
> {code}
> 3. Disable the table
> {code}
> disable "test"
> {code}
> 4. Restore the HBase snapshot
> {code}
> restore_snapshot "snap"
> {code}
> 5. Open the table.jsp page for the table in a browser, but it gets stuck
> {code}
> http://:16010/table.jsp?name=test
> {code}
> According to the following thread dump, it looks like 
> ConnectionImplementation.locateRegionInMeta() gets stuck when getting a 
> compaction state.
> {code}
> "qtp2068100669-89" #89 daemon prio=5 os_prio=31 tid=0x7febac55b800 
> nid=0xf403 waiting on condition [0x762b7000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:933)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:752)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:738)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegions(ConnectionImplementation.java:694)
> at 
> org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegions(ConnectionUtils.java:131)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:3336)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getCompactionState(HBaseAdmin.java:2521)
> at 
> org.apache.hadoop.hbase.generated.master.table_jsp._jspService(table_jsp.java:316)
> at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:111)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
> at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:112)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:48)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1374)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at 
> org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandl