[jira] [Updated] (HBASE-19866) TestRegionServerReportForDuty doesn't timeout

2018-01-25 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19866:
-
Description: 
Reading around the JUnit docs, it looks like the cause is the combination of 
these two rules:
- @Test(timeout=X) applies only to the test method, not to the whole test 
fixture (@Before, @After, etc.)
- The Timeout rule applies to the whole test fixture

TestRegionServerReportForDuty just has @Test(timeout=18) and no Timeout 
rule, unlike so many of our other tests.
The test method, in the logs I have, runs in less than 60 sec, so it meets the 
timeout specified in the @Test annotation.
However, we get stuck in tearDown, and since there is no Timeout rule, the test 
keeps on running until surefire kills the JVM after forkedProcessTimeoutInSeconds 
(set to 900 sec).
Let's use the {{Timeout}} rule instead of {{@Test(timeout=18)}}.

*However, note that this won't solve the root cause of the hangup.* It'll just 
make the test fail neatly rather than getting stuck and requiring the surefire 
plugin to kill the forked JVMs (see HBASE-19803).
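The proposed switch can be sketched in JUnit 4 terms as follows. This is an illustrative sketch, not the actual patch: the 180-second value and the method name are placeholders, and the real test's limit is truncated in the description above.

```java
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

public class TestRegionServerReportForDutySketch {
  // Unlike @Test(timeout=...), this rule bounds the whole fixture:
  // @Before, the test method itself, and @After/tearDown.
  @Rule
  public final Timeout globalTimeout = Timeout.seconds(180); // illustrative value

  @Test
  public void testReportForDuty() throws Exception {
    // test body; a hang in tearDown now trips the rule above instead of
    // waiting for surefire's forkedProcessTimeoutInSeconds kill
  }
}
```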

  was:
Reading around the JUnit docs, it looks like the cause is the combination of 
these two rules:
- @Test(timeout=X) applies only to the test method, not to the whole test 
fixture (@Before, @After, etc.)
- The Timeout rule applies to the whole test fixture

Since 


> TestRegionServerReportForDuty doesn't timeout
> -
>
> Key: HBASE-19866
> URL: https://issues.apache.org/jira/browse/HBASE-19866
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>Priority: Major
>
> Reading around the JUnit docs, it looks like the cause is the combination of 
> these two rules:
> - @Test(timeout=X) applies only to the test method, not to the whole test 
> fixture (@Before, @After, etc.)
> - The Timeout rule applies to the whole test fixture
> TestRegionServerReportForDuty just has @Test(timeout=18) and no Timeout 
> rule, unlike so many of our other tests.
> The test method, in the logs I have, runs in less than 60 sec, so it meets 
> the timeout specified in the @Test annotation.
> However, we get stuck in tearDown, and since there is no Timeout rule, the 
> test keeps on running until surefire kills the JVM after 
> forkedProcessTimeoutInSeconds (set to 900 sec).
> Let's use the {{Timeout}} rule instead of {{@Test(timeout=18)}}.
> *However, note that this won't solve the root cause of the hangup.* It'll just 
> make the test fail neatly rather than getting stuck and requiring the surefire 
> plugin to kill the forked JVMs (see HBASE-19803).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19866) TestRegionServerReportForDuty doesn't timeout

2018-01-25 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19866:
-
Description: 
Reading around the JUnit docs, it looks like the cause is the combination of 
these two rules:
- @Test(timeout=X) applies only to the test method, not to the whole test 
fixture (@Before, @After, etc.)
- The Timeout rule applies to the whole test fixture

Since 

> TestRegionServerReportForDuty doesn't timeout
> -
>
> Key: HBASE-19866
> URL: https://issues.apache.org/jira/browse/HBASE-19866
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>Priority: Major
>
> Reading around the JUnit docs, it looks like the cause is the combination of 
> these two rules:
> - @Test(timeout=X) applies only to the test method, not to the whole test 
> fixture (@Before, @After, etc.)
> - The Timeout rule applies to the whole test fixture
> Since 





[jira] [Created] (HBASE-19866) TestRegionServerReportForDuty doesn't timeout

2018-01-25 Thread Appy (JIRA)
Appy created HBASE-19866:


 Summary: TestRegionServerReportForDuty doesn't timeout
 Key: HBASE-19866
 URL: https://issues.apache.org/jira/browse/HBASE-19866
 Project: HBase
  Issue Type: Bug
Reporter: Appy
Assignee: Appy








[jira] [Comment Edited] (HBASE-19803) False positive for the HBASE-Find-Flaky-Tests job

2018-01-25 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340652#comment-16340652
 ] 

Appy edited comment on HBASE-19803 at 1/26/18 7:31 AM:
---

Fix for TestTokenAuthentication - HBASE-19862
Looking at TestRegionServerReportForDuty logs, the test was stuck and running 
until surefire killed the JVM. HBASE-19866


was (Author: appy):
Fix for TestTokenAuthentication - HBASE-19862
Looking at TestRegionServerReportForDuty logs, the test was stuck and running 
until surefire killed the JVM.

> False positive for the HBASE-Find-Flaky-Tests job
> -
>
> Key: HBASE-19803
> URL: https://issues.apache.org/jira/browse/HBASE-19803
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Priority: Major
> Attachments: 2018-01-24T17-45-37_000-jvmRun1.dumpstream, 
> HBASE-19803.master.001.patch
>
>
> It reports two hangs for TestAsyncTableGetMultiThreaded, but I checked the 
> surefire output:
> https://builds.apache.org/job/HBASE-Flaky-Tests/24830/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreaded-output.txt
> This one was likely killed in the middle of the run, within 20 seconds.
> https://builds.apache.org/job/HBASE-Flaky-Tests/24852/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreaded-output.txt
> This one was also killed within about 1 minute.
> The test is declared as LargeTests, so the time limit should be 10 minutes. It 
> seems that the JVM may crash during the mvn test run; then we kill all the 
> running tests and may mark some of them as hanging, which leads to the false 
> positive.





[jira] [Commented] (HBASE-19803) False positive for the HBASE-Find-Flaky-Tests job

2018-01-25 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340652#comment-16340652
 ] 

Appy commented on HBASE-19803:
--

Fix for TestTokenAuthentication - HBASE-19862
Looking at TestRegionServerReportForDuty logs, the test was stuck and running 
until surefire killed the JVM.

> False positive for the HBASE-Find-Flaky-Tests job
> -
>
> Key: HBASE-19803
> URL: https://issues.apache.org/jira/browse/HBASE-19803
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Priority: Major
> Attachments: 2018-01-24T17-45-37_000-jvmRun1.dumpstream, 
> HBASE-19803.master.001.patch
>
>
> It reports two hangs for TestAsyncTableGetMultiThreaded, but I checked the 
> surefire output:
> https://builds.apache.org/job/HBASE-Flaky-Tests/24830/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreaded-output.txt
> This one was likely killed in the middle of the run, within 20 seconds.
> https://builds.apache.org/job/HBASE-Flaky-Tests/24852/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreaded-output.txt
> This one was also killed within about 1 minute.
> The test is declared as LargeTests, so the time limit should be 10 minutes. It 
> seems that the JVM may crash during the mvn test run; then we kill all the 
> running tests and may mark some of them as hanging, which leads to the false 
> positive.





[jira] [Commented] (HBASE-19840) Flakey TestMetaWithReplicas

2018-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340642#comment-16340642
 ] 

Hadoop QA commented on HBASE-19840:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  7m 
 3s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
25s{color} | {color:red} hbase-server: The patch generated 5 new + 244 
unchanged - 12 fixed = 249 total (was 256) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
12s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
21m 36s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
31s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}104m 
35s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19840 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907815/HBASE-19840.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux b6d4fc08ee91 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / aeffca497b |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| checkstyle | 
https://builds.apache.org

[jira] [Assigned] (HBASE-19864) Use protobuf instead of enum.ordinal to store SyncReplicationState

2018-01-25 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang reassigned HBASE-19864:
--

Assignee: Guanghao Zhang

> Use protobuf instead of enum.ordinal to store SyncReplicationState
> --
>
> Key: HBASE-19864
> URL: https://issues.apache.org/jira/browse/HBASE-19864
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Guanghao Zhang
>Priority: Major
>






[jira] [Updated] (HBASE-19861) Avoid using RPCs when querying table infos for master status pages

2018-01-25 Thread Xiaolin Ha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Ha updated HBASE-19861:
---
Attachment: HBASE-19861.v4.patch

> Avoid using RPCs when querying table infos for master status pages
> --
>
> Key: HBASE-19861
> URL: https://issues.apache.org/jira/browse/HBASE-19861
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Attachments: HBASE-19861.v1.patch, HBASE-19861.v3.patch, 
> HBASE-19861.v4.patch, errorMsgExample.png
>
>
> When querying table information for the master status pages, the current 
> method uses admin interfaces. For example, when listing user tables, the code 
> is as follows.
> Connection connection = master.getConnection();
> Admin admin = connection.getAdmin();
> try {
>   tables = admin.listTables();
> } finally {
>   admin.close();
> }
> But actually, we can get all user tables from the master's memory.
> Using admin interfaces means using RPCs, which is inefficient.
>  
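The direction of the patch can be sketched as below: serve the status page from the master's own in-memory state instead of issuing an Admin RPC back to the same process. The `MasterServices` interface and `getUserTableNames()` accessor here are hypothetical stand-ins for whatever structure HMaster actually keeps; they are not the real HBase API.

```java
import java.util.List;

public class StatusPageSketch {
  // Hypothetical master-side accessor; stands in for the master's
  // in-memory table metadata (the real HMaster API may differ).
  interface MasterServices {
    List<String> getUserTableNames(); // no RPC: reads the master's own state
  }

  static List<String> listUserTables(MasterServices master) {
    // Compare with the Admin-based version quoted above, which opens a
    // connection and issues a listTables RPC back to this very process.
    return master.getUserTableNames();
  }

  public static void main(String[] args) {
    MasterServices master = () -> List.of("t1", "t2");
    System.out.println(listUserTables(master)); // prints [t1, t2]
  }
}
```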





[jira] [Commented] (HBASE-19840) Flakey TestMetaWithReplicas

2018-01-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340565#comment-16340565
 ] 

stack commented on HBASE-19840:
---

Retry

> Flakey TestMetaWithReplicas
> ---
>
> Key: HBASE-19840
> URL: https://issues.apache.org/jira/browse/HBASE-19840
> Project: HBase
>  Issue Type: Sub-task
>  Components: flakey, test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: HBASE-19840.master.001.patch, 
> HBASE-19840.master.001.patch
>
>
> Failing about 15% of the time..  In testShutdownHandling.. 
> [https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests-branch2.0/lastSuccessfulBuild/artifact/dashboard.html]
>  
> Adding some debug. It's hard to follow what is going on in this test.





[jira] [Updated] (HBASE-19840) Flakey TestMetaWithReplicas

2018-01-25 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19840:
--
Attachment: HBASE-19840.master.001.patch

> Flakey TestMetaWithReplicas
> ---
>
> Key: HBASE-19840
> URL: https://issues.apache.org/jira/browse/HBASE-19840
> Project: HBase
>  Issue Type: Sub-task
>  Components: flakey, test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: HBASE-19840.master.001.patch, 
> HBASE-19840.master.001.patch
>
>
> Failing about 15% of the time..  In testShutdownHandling.. 
> [https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests-branch2.0/lastSuccessfulBuild/artifact/dashboard.html]
>  
> Adding some debug. It's hard to follow what is going on in this test.





[jira] [Updated] (HBASE-19861) Avoid using RPCs when querying table infos for master status pages

2018-01-25 Thread Xiaolin Ha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Ha updated HBASE-19861:
---
Attachment: HBASE-19861.v3.patch

> Avoid using RPCs when querying table infos for master status pages
> --
>
> Key: HBASE-19861
> URL: https://issues.apache.org/jira/browse/HBASE-19861
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Attachments: HBASE-19861.v1.patch, HBASE-19861.v3.patch, 
> errorMsgExample.png
>
>
> When querying table information for the master status pages, the current 
> method uses admin interfaces. For example, when listing user tables, the code 
> is as follows.
> Connection connection = master.getConnection();
> Admin admin = connection.getAdmin();
> try {
>   tables = admin.listTables();
> } finally {
>   admin.close();
> }
> But actually, we can get all user tables from the master's memory.
> Using admin interfaces means using RPCs, which is inefficient.
>  





[jira] [Updated] (HBASE-19861) Avoid using RPCs when querying table infos for master status pages

2018-01-25 Thread Xiaolin Ha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Ha updated HBASE-19861:
---
Attachment: errorMsgExample.png

> Avoid using RPCs when querying table infos for master status pages
> --
>
> Key: HBASE-19861
> URL: https://issues.apache.org/jira/browse/HBASE-19861
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Attachments: HBASE-19861.v1.patch, HBASE-19861.v3.patch, 
> errorMsgExample.png
>
>
> When querying table information for the master status pages, the current 
> method uses admin interfaces. For example, when listing user tables, the code 
> is as follows.
> Connection connection = master.getConnection();
> Admin admin = connection.getAdmin();
> try {
>   tables = admin.listTables();
> } finally {
>   admin.close();
> }
> But actually, we can get all user tables from the master's memory.
> Using admin interfaces means using RPCs, which is inefficient.
>  





[jira] [Commented] (HBASE-19818) Scan time limit not work if the filter always filter row key

2018-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340540#comment-16340540
 ] 

Hadoop QA commented on HBASE-19818:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
37s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
39s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} hbase-server: The patch generated 0 new + 253 
unchanged - 16 fixed = 253 total (was 269) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
24s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
15m 45s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}103m 
11s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}136m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:9f2f2db |
| JIRA Issue | HBASE-19818 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907800/HBASE-19818.branch-2.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux aeacea73fb11 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2 / 72f4e98ed1 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11199/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11199/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> Scan time limit not work if the filter always filter row key
> 

[jira] [Commented] (HBASE-19862) Fix TestTokenAuthentication - fake RegionCoprocessorEnvironment is not of type HasRegionServerServices

2018-01-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340504#comment-16340504
 ] 

Duo Zhang commented on HBASE-19862:
---

+1. This will be pushed to both master and branch-2, right?

> Fix TestTokenAuthentication - fake RegionCoprocessorEnvironment is not of 
> type HasRegionServerServices
> --
>
> Key: HBASE-19862
> URL: https://issues.apache.org/jira/browse/HBASE-19862
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>Priority: Major
> Attachments: HBASE-19862.branch-2.001.patch
>
>
> We have the temporary HasRegionServerServices interface (added in HBASE-19007) 
> and the concept of CoreCoprocessors, which require that whichever 
> *CoprocessorEnvironment they get also implements HasRegionServerServices.
> This test builds a mock RegionCpEnv for TokenProvider (a RegionCoprocessor), 
> but it falls short of what's expected and results in the following exceptions 
> in the test logs:
> {noformat}
> 2018-01-25 14:38:54,855 ERROR [TokenServer:d9a9782cd075,39492,1516891133911] 
> helpers.MarkerIgnoringBase(159): Aborting on: 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer$2 
> cannot be cast to org.apache.hadoop.hbase.coprocessor.HasRegionServerServices
> java.lang.ClassCastException: 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer$2 
> cannot be cast to org.apache.hadoop.hbase.coprocessor.HasRegionServerServices
>   at 
> org.apache.hadoop.hbase.security.token.TokenProvider.start(TokenProvider.java:70)
>   at 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer.initialize(TestTokenAuthentication.java:275)
>   at 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer.run(TestTokenAuthentication.java:347)
> {noformat}
> The patch adds the missing interface to the mock. It also uses Mockito to mock 
> the interfaces rather than the crude hand-rolled way.
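One idiomatic way to build such a mock with Mockito is the `extraInterfaces` setting. This is a sketch of the technique, not necessarily the exact code in the attached patch, and it assumes Mockito plus the HBase coprocessor classes are on the classpath:

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.withSettings;

// A plain mock(RegionCoprocessorEnvironment.class) would fail the cast in
// TokenProvider.start(); extraInterfaces makes the mock implement both types,
// so the ClassCastException in the log above no longer occurs.
RegionCoprocessorEnvironment env = mock(
    RegionCoprocessorEnvironment.class,
    withSettings().extraInterfaces(HasRegionServerServices.class));
```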





[jira] [Commented] (HBASE-19803) False positive for the HBASE-Find-Flaky-Tests job

2018-01-25 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340502#comment-16340502
 ] 

Appy commented on HBASE-19803:
--

Chatted with him; Duo was talking about CategoryBasedTimeout. I updated the 
explanation above to cover that case.

> False positive for the HBASE-Find-Flaky-Tests job
> -
>
> Key: HBASE-19803
> URL: https://issues.apache.org/jira/browse/HBASE-19803
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Priority: Major
> Attachments: 2018-01-24T17-45-37_000-jvmRun1.dumpstream, 
> HBASE-19803.master.001.patch
>
>
> It reports two hangs for TestAsyncTableGetMultiThreaded, but I checked the 
> surefire output:
> https://builds.apache.org/job/HBASE-Flaky-Tests/24830/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreaded-output.txt
> This one was likely killed in the middle of the run, within 20 seconds.
> https://builds.apache.org/job/HBASE-Flaky-Tests/24852/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreaded-output.txt
> This one was also killed within about 1 minute.
> The test is declared as LargeTests, so the time limit should be 10 minutes. It 
> seems that the JVM may crash during the mvn test run; then we kill all the 
> running tests and may mark some of them as hanging, which leads to the false 
> positive.





[jira] [Comment Edited] (HBASE-19803) False positive for the HBASE-Find-Flaky-Tests job

2018-01-25 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340483#comment-16340483
 ] 

Appy edited comment on HBASE-19803 at 1/26/18 2:55 AM:
---

I think I have cracked it; it's basically this:
{color:red}Edit{color}: Adding more details.
- Some test goes bad (which one? That is basically what we were trying to figure 
out) in a way that kills the JVM (hence our category-based timeout is useless here)
- Since the JVM died without reporting back to surefire, the plugin's main 
process keeps waiting
- After 900 sec (forkedProcessTimeoutInSeconds), the plugin issues 'shutdown' to 
*all* forked JVMs (that's why we see the *.dump files), basically killing every 
test running at that time (I don't know why it shuts down all JVMs; it's just 
what I am observing)
** Side note: if findHangingTests.py reports X hanging tests, and we have the 
surefire forkcount set to Y, then there should be *at least* ceiling[X/Y] 
occurrences of the following message in each *.dump file.
{noformat}
# Created on 2018-01-25T12:14:47.114
Killing self fork JVM. Received SHUTDOWN command from Maven shutdown hook.
{noformat}

With that figured out, it was easy to find the culprit tests.
Look for the timestamp of the "Killing self fork..." messages in the dump file 
and find the tests which started *exactly 900 sec before it*. Any hanging test 
(as reported by our script) with a start timestamp between these two times was 
just caught in the crossfire.
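The matching logic above can be sketched as follows. This is an illustrative stand-alone version, not part of findHangingTests.py; the timestamps and test names in main are made up to show the shape of the match.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class CulpritFinder {
  // The likely culprit is the test that started forkedProcessTimeoutInSeconds
  // (900 sec) before the "Killing self fork JVM" timestamp; anything that
  // started between those two instants was just caught in the crossfire.
  static List<String> findCulprits(Instant killTime, Map<String, Instant> testStarts,
                                   long timeoutSeconds, long toleranceSeconds) {
    Instant expectedStart = killTime.minusSeconds(timeoutSeconds);
    List<String> culprits = new ArrayList<>();
    for (Map.Entry<String, Instant> e : testStarts.entrySet()) {
      long skew = Math.abs(Duration.between(expectedStart, e.getValue()).getSeconds());
      if (skew <= toleranceSeconds) {
        culprits.add(e.getKey());
      }
    }
    return culprits;
  }

  public static void main(String[] args) {
    Instant kill = Instant.parse("2018-01-25T12:14:47Z");
    Map<String, Instant> starts = Map.of(
        "TestRegionServerReportForDuty", Instant.parse("2018-01-25T11:59:47Z"), // 900s earlier
        "TestInnocent", Instant.parse("2018-01-25T12:05:00Z"));                 // crossfire victim
    System.out.println(findCulprits(kill, starts, 900, 5)); // prints [TestRegionServerReportForDuty]
  }
}
```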

Applying the method to #207 run 
(https://builds.apache.org/job/HBase%20Nightly/job/master/207/) will reveal 
these three culprit tests:
- security.token.TestTokenAuthentication
- master.balancer.TestStochasticLoadBalancer
- regionserver.TestRegionServerReportForDuty


was (Author: appy):
I think I have cracked it; it's basically this:
- Some test goes bad (which one? That is basically what we were trying to figure out)
- After 900 sec (forkedProcessTimeoutInSeconds), the surefire plugin issues 
'shutdown' to *all* forked JVMs (that's why we see the *.dump files), basically 
killing every test running at that time.
** Side note: if findHangingTests.py reports X number of hanging tests, and we 
have surefire forkcount as Y, then there should be *at least* ceiling[X/Y] 
count of the following message in each *.dump file.
{noformat}
# Created on 2018-01-25T12:14:47.114
Killing self fork JVM. Received SHUTDOWN command from Maven shutdown hook.
{noformat}

With that figured out, it was easy to find the culprit tests.
Look for the timestamp of the "Killing self fork..." messages in the dump file 
and find the tests which started *exactly 900 sec before it*. Any hanging test 
(as reported by our script) with a start timestamp between these two times was 
just caught in the crossfire.

Applying the method to #207 run 
(https://builds.apache.org/job/HBase%20Nightly/job/master/207/) will reveal 
these three culprit tests:
- security.token.TestTokenAuthentication
- master.balancer.TestStochasticLoadBalancer
- regionserver.TestRegionServerReportForDuty

> False positive for the HBASE-Find-Flaky-Tests job
> -
>
> Key: HBASE-19803
> URL: https://issues.apache.org/jira/browse/HBASE-19803
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Priority: Major
> Attachments: 2018-01-24T17-45-37_000-jvmRun1.dumpstream, 
> HBASE-19803.master.001.patch
>
>
> It reports two hangs for TestAsyncTableGetMultiThreaded, but I checked the 
> surefire output:
> https://builds.apache.org/job/HBASE-Flaky-Tests/24830/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreaded-output.txt
> This one was likely killed in the middle of the run, within 20 seconds.
> https://builds.apache.org/job/HBASE-Flaky-Tests/24852/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreaded-output.txt
> This one was also killed within about 1 minute.
> The test is declared as LargeTests, so the time limit should be 10 minutes. It 
> seems that the JVM may crash during the mvn test run; then we kill all the 
> running tests and may mark some of them as hanging, which leads to the false 
> positive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19803) False positive for the HBASE-Find-Flaky-Tests job

2018-01-25 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340490#comment-16340490
 ] 

Appy commented on HBASE-19803:
--

Yes, but we need that timeout, no? Do you have a way around it?

> False positive for the HBASE-Find-Flaky-Tests job
> -
>
> Key: HBASE-19803
> URL: https://issues.apache.org/jira/browse/HBASE-19803
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Priority: Major
> Attachments: 2018-01-24T17-45-37_000-jvmRun1.dumpstream, 
> HBASE-19803.master.001.patch
>
>
> It reports two hangs for TestAsyncTableGetMultiThreaded, but I checked the 
> surefire output
> https://builds.apache.org/job/HBASE-Flaky-Tests/24830/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreaded-output.txt
> This one was likely to be killed in the middle of the run within 20 seconds.
> https://builds.apache.org/job/HBASE-Flaky-Tests/24852/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreaded-output.txt
> This one was also killed within about 1 minutes.
> The test is declared as LargeTests so the time limit should be 10 minutes. It 
> seems that the jvm may crash during the mvn test run and then we will kill 
> all the running tests and then we may mark some of them as hang which leads 
> to the false positive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19803) False positive for the HBASE-Find-Flaky-Tests job

2018-01-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340489#comment-16340489
 ] 

Duo Zhang commented on HBASE-19803:
---

The timeout limit from junit does not work? Damn...

Thanks [~appy]. Let's fix these three UTs first.
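For reference, the distinction HBASE-19866 draws is that @Test(timeout=...) guards only the test method, while a Timeout rule covers the whole fixture (@Before/@After included). A plain-Java sketch of the two scopes, with made-up durations standing in for the real test (the body finishes in under a minute while tearDown hangs):

```java
// Illustrative only: simulated durations stand in for JUnit's real timers.
public class TimeoutScopeSketch {
    // Seconds: the body finishes quickly, tearDown hangs, the timeout is 180s.
    static final long TEST_BODY = 60, TEAR_DOWN = 1000, TIMEOUT = 180;

    // @Test(timeout=...) semantics: only the test body is measured,
    // so a hang in tearDown goes unnoticed.
    static boolean methodTimeoutFires() {
        return TEST_BODY > TIMEOUT;
    }

    // org.junit.rules.Timeout semantics: body plus @Before/@After are
    // measured together, so the hanging tearDown trips the limit.
    static boolean ruleTimeoutFires() {
        return TEST_BODY + TEAR_DOWN > TIMEOUT;
    }

    public static void main(String[] args) {
        System.out.println(methodTimeoutFires()); // false: the test "passes"
        System.out.println(ruleTimeoutFires());   // true: fixture hang is caught
    }
}
```

This is why the stuck tearDown only surfaced when surefire killed the fork 900 seconds later.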






[jira] [Commented] (HBASE-19803) False positive for the HBASE-Find-Flaky-Tests job

2018-01-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340485#comment-16340485
 ] 

Duo Zhang commented on HBASE-19803:
---

OK, so the problem is: we have a 15-minute timeout, and if a test hangs longer 
than that, the surefire plugin will try to kill all the ongoing tests and 
report a failure to us?






[jira] [Commented] (HBASE-19862) Fix TestTokenAuthentication - fake RegionCoprocessorEnvironment is not of type HasRegionServerServices

2018-01-25 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340484#comment-16340484
 ] 

Appy commented on HBASE-19862:
--

Will clean up the checkstyle warnings on commit.
Ping [~zghaobac] since you reviewed the related change too.

> Fix TestTokenAuthentication - fake RegionCoprocessorEnvironment is not of 
> type HasRegionServerServices
> --
>
> Key: HBASE-19862
> URL: https://issues.apache.org/jira/browse/HBASE-19862
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>Priority: Major
> Attachments: HBASE-19862.branch-2.001.patch
>
>
> We have the temporary HasRegionServerServices interface (added in HBASE-19007) and the 
> concept of CoreCoprocessors, which require that whichever *CoprocessorEnvironment they 
> get also implements HasRegionServerServices.
> This test builds a mock RegionCpEnv for TokenProvider (a RegionCoprocessor), but 
> it falls short of what's expected and results in the following exceptions in the test 
> logs
> {noformat}
> 2018-01-25 14:38:54,855 ERROR [TokenServer:d9a9782cd075,39492,1516891133911] 
> helpers.MarkerIgnoringBase(159): Aborting on: 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer$2 
> cannot be cast to org.apache.hadoop.hbase.coprocessor.HasRegionServerServices
> java.lang.ClassCastException: 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer$2 
> cannot be cast to org.apache.hadoop.hbase.coprocessor.HasRegionServerServices
>   at 
> org.apache.hadoop.hbase.security.token.TokenProvider.start(TokenProvider.java:70)
>   at 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer.initialize(TestTokenAuthentication.java:275)
>   at 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer.run(TestTokenAuthentication.java:347)
> {noformat}
> Patch adds the missing interface to the mock. Also, uses Mockito to mock the 
> interfaces rather than the crude way.
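The shape of the fix can be sketched in plain Java with empty stand-in interfaces for the real HBase types (the actual patch mocks them via Mockito, per the description): the fake environment must also implement the marker interface so the cast in TokenProvider.start() succeeds.

```java
// Stand-ins for the real HBase interfaces (empty on purpose; illustrative only).
public class FakeEnvSketch {
    interface RegionCoprocessorEnvironment {}
    interface HasRegionServerServices {}

    // Before the fix the fake environment implemented only
    // RegionCoprocessorEnvironment, so TokenProvider.start()'s cast to
    // HasRegionServerServices threw ClassCastException. Implementing the
    // marker interface as well makes the cast succeed.
    static class FixedFakeEnv implements RegionCoprocessorEnvironment,
                                         HasRegionServerServices {}

    // Mirrors the failing cast guard in TokenProvider.start().
    static boolean castSucceeds(RegionCoprocessorEnvironment env) {
        return env instanceof HasRegionServerServices;
    }

    public static void main(String[] args) {
        RegionCoprocessorEnvironment broken = new RegionCoprocessorEnvironment() {};
        System.out.println(castSucceeds(broken));             // false: the CCE above
        System.out.println(castSucceeds(new FixedFakeEnv())); // true after the fix
    }
}
```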





[jira] [Commented] (HBASE-19803) False positive for the HBASE-Find-Flaky-Tests job

2018-01-25 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340483#comment-16340483
 ] 

Appy commented on HBASE-19803:
--

I think I have cracked it. It's basically this:
- Some test goes bad (which one? - basically what we were trying to figure out)
- After 900 sec (forkedProcessTimeoutInSeconds), the surefire plugin issues 
'shutdown' to *all* forked JVMs (that's why we see the *.dump files), basically 
killing every test running at that time.
** Side note: if findHangingTests.py reports X hanging tests, and we 
have a surefire forkcount of Y, then there should be *at least* ceiling[X/Y] 
occurrences of the following message in each *.dump file.
{noformat}
# Created on 2018-01-25T12:14:47.114
Killing self fork JVM. Received SHUTDOWN command from Maven shutdown hook.
{noformat}

With that figured out, it was easy to find the culprit tests:
look for the timestamp of each "Killing self fork..." message in the dump file and find the 
tests which started *exactly 900 sec before it*. Any hanging test (as reported 
by our script) whose start timestamp falls between those two times was just caught in 
the crossfire.
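The counting rule and the timestamp matching described here can be sketched as follows (helper names are hypothetical, not part of findHangingTests.py or any HBase tooling):

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of the dump-file heuristic; names are made up for illustration.
public class CulpritHeuristic {

    // If findHangingTests.py reports X hanging tests and surefire runs Y forks,
    // expect at least ceil(X / Y) "Killing self fork JVM..." messages per fork.
    static int minShutdownMessages(int hangingTests, int forkCount) {
        return (hangingTests + forkCount - 1) / forkCount; // integer ceiling
    }

    // A culprit started exactly forkedProcessTimeoutInSeconds (900s) before the
    // shutdown message; tests started in between were caught in the crossfire.
    static boolean isLikelyCulprit(Instant testStart, Instant killMessage) {
        return Duration.between(testStart, killMessage).getSeconds() == 900;
    }

    public static void main(String[] args) {
        Instant kill = Instant.parse("2018-01-25T12:14:47Z");
        System.out.println(minShutdownMessages(3, 2));                     // 2
        System.out.println(isLikelyCulprit(kill.minusSeconds(900), kill)); // true
        System.out.println(isLikelyCulprit(kill.minusSeconds(120), kill)); // false
    }
}
```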

Applying the method to #207 run 
(https://builds.apache.org/job/HBase%20Nightly/job/master/207/) will reveal 
these three culprit tests:
- security.token.TestTokenAuthentication
- master.balancer.TestStochasticLoadBalancer
- regionserver.TestRegionServerReportForDuty






[jira] [Updated] (HBASE-19818) Scan time limit not work if the filter always filter row key

2018-01-25 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19818:
---
Attachment: HBASE-19818.branch-2.patch

> Scan time limit not work if the filter always filter row key
> 
>
> Key: HBASE-19818
> URL: https://issues.apache.org/jira/browse/HBASE-19818
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.0-beta-2
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-19818.branch-2.patch, HBASE-19818.master.003.patch
>
>
> [https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java]
> nextInternal() method.
> {code:java}
> // Check if rowkey filter wants to exclude this row. If so, loop to next.
>  // Technically, if we hit limits before on this row, we don't need this call.
>  if (filterRowKey(current)) {
>  incrementCountOfRowsFilteredMetric(scannerContext);
>  // early check, see HBASE-16296
>  if (isFilterDoneInternal()) {
>  return 
> scannerContext.setScannerState(NextState.NO_MORE_VALUES).hasMoreValues();
>  }
>  // Typically the count of rows scanned is incremented inside 
> #populateResult. However,
>  // here we are filtering a row based purely on its row key, preventing us 
> from calling
>  // #populateResult. Thus, perform the necessary increment here to rows 
> scanned metric
>  incrementCountOfRowsScannedMetric(scannerContext);
>  boolean moreRows = nextRow(scannerContext, current);
>  if (!moreRows) {
>  return 
> scannerContext.setScannerState(NextState.NO_MORE_VALUES).hasMoreValues();
>  }
>  results.clear();
>  continue;
>  }
> // Ok, we are good, let's try to get some results from the main heap.
>  populateResult(results, this.storeHeap, scannerContext, current);
>  if (scannerContext.checkAnyLimitReached(LimitScope.BETWEEN_CELLS)) {
>  if (hasFilterRow) {
>  throw new IncompatibleFilterException(
>  "Filter whose hasFilterRow() returns true is incompatible with scans that 
> must "
>  + " stop mid-row because of a limit. ScannerContext:" + scannerContext);
>  }
>  return true;
>  }
> {code}
> If filterRowKey always returns true, then we skip checkAnyLimitReached entirely. For 
> the batch/size limits, it is ok to skip as we don't read anything. But for the time 
> limit, it is not right. If the filter always filters the row key, we will be stuck 
> here for a long time.
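A self-contained sketch of the fix direction the description points to (the real change lands in HRegion.nextInternal and ScannerContext; everything below is a simplified stand-in): check the time limit on the filterRowKey branch too, so a filter that rejects every row key cannot hold the scanner past its deadline.

```java
// Simplified stand-in for the scan loop; not real HBase code.
public class ScanTimeLimitSketch {

    static final long TIME_LIMIT_MS = 100;

    // Returns how many rows were examined before the deadline fired, even
    // though the filter rejects every row key. Without a deadline check on
    // this branch (the bug), the loop would run until all rows are exhausted.
    static int scanWithDeadline(int totalRows, long startMs) {
        int examined = 0;
        for (int row = 0; row < totalRows; row++) {
            boolean filteredByRowKey = true; // worst case: filter rejects all
            if (filteredByRowKey) {
                examined++;
                // The missing piece in the bug: a time-limit check before 'continue'.
                if (fakeClock(startMs, examined) - startMs >= TIME_LIMIT_MS) {
                    return examined; // heartbeat back to the client
                }
                continue;
            }
            // (otherwise: populateResult(...) and the usual limit checks)
        }
        return examined;
    }

    // Deterministic fake clock: each examined row costs 1 ms.
    static long fakeClock(long startMs, int examined) {
        return startMs + examined;
    }

    public static void main(String[] args) {
        // 100 rows examined, then the deadline fires instead of scanning 1M rows.
        System.out.println(scanWithDeadline(1_000_000, 0));
    }
}
```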





[jira] [Commented] (HBASE-19818) Scan time limit not work if the filter always filter row key

2018-01-25 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340421#comment-16340421
 ] 

Guanghao Zhang commented on HBASE-19818:


The failed UT passed locally. Attaching the patch again.






[jira] [Updated] (HBASE-19818) Scan time limit not work if the filter always filter row key

2018-01-25 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19818:
---
Attachment: (was: HBASE-19818.branch-2.patch)






[jira] [Commented] (HBASE-19857) Complete the procedure for adding a sync replication peer

2018-01-25 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340413#comment-16340413
 ] 

Guanghao Zhang commented on HBASE-19857:


{quote}Let's use pb to keep the same pattern as other stuff on zk? Can open 
a new issue to address it.
{quote}
OK. Let me take a look at this.

> Complete the procedure for adding a sync replication peer
> -
>
> Key: HBASE-19857
> URL: https://issues.apache.org/jira/browse/HBASE-19857
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: HBASE-19064
>
> Attachments: HBASE-19857-HBASE-19064-v1.patch
>
>






[jira] [Updated] (HBASE-19857) Complete the procedure for adding a sync replication peer

2018-01-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-19857:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HBASE-19064
   Status: Resolved  (was: Patch Available)

Pushed to branch HBASE-19064.

Thanks [~zghaobac] for reviewing.







[jira] [Commented] (HBASE-19862) Fix TestTokenAuthentication - fake RegionCoprocessorEnvironment is not of type HasRegionServerServices

2018-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340348#comment-16340348
 ] 

Hadoop QA commented on HBASE-19862:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
21s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
54s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
9s{color} | {color:red} hbase-server: The patch generated 29 new + 4 unchanged 
- 9 fixed = 33 total (was 13) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
21s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 50s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 43s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.TestJMXListener |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:9f2f2db |
| JIRA Issue | HBASE-19862 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907774/HBASE-19862.branch-2.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 3baa0f25b0c8 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2 / 72f4e98ed1 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11198/artifact/patchprocess/diff-checkstyle-hbase-server.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11198/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11198/testReport/ |
| modules | C: hbase-server U: hbase-server |
| C

[jira] [Created] (HBASE-19865) Add UT for sync replication peer in DA state

2018-01-25 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-19865:
-

 Summary: Add UT for sync replication peer in DA state
 Key: HBASE-19865
 URL: https://issues.apache.org/jira/browse/HBASE-19865
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang


To confirm that it works just like a normal replication peer, which can still 
replicate data asynchronously to the peer cluster.





[jira] [Created] (HBASE-19864) Use protobuf instead of enum.ordinal to store SyncReplicationState

2018-01-25 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-19864:
-

 Summary: Use protobuf instead of enum.ordinal to store 
SyncReplicationState
 Key: HBASE-19864
 URL: https://issues.apache.org/jira/browse/HBASE-19864
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang








[jira] [Commented] (HBASE-19857) Complete the procedure for adding a sync replication peer

2018-01-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340345#comment-16340345
 ] 

Duo Zhang commented on HBASE-19857:
---

Let me commit.







[jira] [Commented] (HBASE-19863) java.lang.IllegalStateException: isDelete failed when SingleColumnValueFilter is used

2018-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340334#comment-16340334
 ] 

Ted Yu commented on HBASE-19863:


This case first came up during the past weekend (I was on call).
The exception was first observed through a Phoenix query, but it can be reproduced in 
the given scenario through a shell scan.

The current workaround is to disable the ROWCOL bloom filter for the column family, but 
this reduces scan performance.

> java.lang.IllegalStateException: isDelete failed when SingleColumnValueFilter 
> is used
> -
>
> Key: HBASE-19863
> URL: https://issues.apache.org/jira/browse/HBASE-19863
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 1.4.1
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
>
> Under some circumstances scan with SingleColumnValueFilter may fail with an 
> exception
> {noformat} 
> java.lang.IllegalStateException: isDelete failed: deleteBuffer=C3, 
> qualifier=C2, timestamp=1516433595543, comparison result: 1 
> at 
> org.apache.hadoop.hbase.regionserver.ScanDeleteTracker.isDeleted(ScanDeleteTracker.java:149)
>   at 
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:386)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:545)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5876)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6027)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5814)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2552)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32385)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
> {noformat}
> Conditions:
> table T with a single column family 0 that uses a ROWCOL bloom filter 
> (important) and column qualifiers C1,C2,C3,C4,C5. 
> When we fill the table, for every row we put a deleted cell for C3.
> The table has a single region with two HStores:
> A: start row: 0, stop row: 99 
> B: start row: 10, stop row: 99
> B has newer versions of rows 10-99. Store files have several blocks each 
> (important). 
> Store A is the result of a major compaction, so it doesn't have any deleted 
> cells (important).
> So, we are running a scan like:
> {noformat}
> scan 'T', { COLUMNS => ['0:C3','0:C5'], FILTER => "SingleColumnValueFilter 
> ('0','C5',=,'binary:whatever')"}
> {noformat}  
> How the scan performs:
> First, we iterate A for rows 0 and 1 without any problems. 
> Next, we start to iterate A for row 10: we read the first cell and set the hfs 
> scanner to A:
> 10:0/C1/0/Put/x, but find that we have a newer version of the cell in B: 
> 10:0/C1/1/Put/x, 
> so we make B our current store scanner. Since we are looking for the 
> particular columns 
> C3 and C5, we perform the StoreScanner.seekOrSkipToNextColumn optimization, 
> which 
> runs reseek for all store scanners.
> For store A the following magic happens in requestSeek:
>   1. The bloom filter check in passesGeneralBloomFilter sets haveToSeek to 
> false because row 10 doesn't have the C3 qualifier in store A.
>   2. Since we don't have to seek, we just create a fake row 
> 10:0/C3/OLDEST_TIMESTAMP/Maximum, an optimization that is quite important for 
> us and is commented with:
> {noformat}
>  // Multi-column Bloom filter optimization.
> // Create a fake key/value, so that this scanner only bubbles up to the 
> top
> // of the KeyValueHeap in StoreScanner after we scanned this row/column in
> // all other store files. The query matcher will then just skip this fake
> // key/value and the store scanner will progress to the next column. This
> // is obviously not a "real real" seek, but unlike the fake KV earlier in
> // this method, we want this to be propagated to ScanQueryMatcher.
> {noformat}
> 
> For store B we would set it to the fake 10:0/C3/createFirstOnRowColTS()/Maximum 
> to skip C3 entirely. 
> After that we start searching for qualifier C5 using seekOrSkipToNextColumn, 
> which first runs trySkipToNextColumn:
> {noformat}
>   protected boolean trySkipToNextColumn(Cell cell) throws IOException {
>

[jira] [Created] (HBASE-19863) java.lang.IllegalStateException: isDelete failed when SingleColumnValueFilter is used

2018-01-25 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created HBASE-19863:
---

 Summary: java.lang.IllegalStateException: isDelete failed when 
SingleColumnValueFilter is used
 Key: HBASE-19863
 URL: https://issues.apache.org/jira/browse/HBASE-19863
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 1.4.1
Reporter: Sergey Soldatov
Assignee: Sergey Soldatov


Under some circumstances scan with SingleColumnValueFilter may fail with an 
exception
{noformat} 
java.lang.IllegalStateException: isDelete failed: deleteBuffer=C3, 
qualifier=C2, timestamp=1516433595543, comparison result: 1 
at 
org.apache.hadoop.hbase.regionserver.ScanDeleteTracker.isDeleted(ScanDeleteTracker.java:149)
at 
org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:386)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:545)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:5876)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6027)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5814)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2552)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32385)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
{noformat}
Conditions:
table T with a single column family 0 that uses a ROWCOL bloom filter (important) 
and column qualifiers C1,C2,C3,C4,C5. 
When we fill the table, for every row we put a deleted cell for C3.
The table has a single region with two HStores:
A: start row: 0, stop row: 99 
B: start row: 10, stop row: 99
B has newer versions of rows 10-99. Store files have several blocks each 
(important). 
Store A is the result of a major compaction, so it doesn't have any deleted 
cells (important).
So, we are running a scan like:
{noformat}
scan 'T', { COLUMNS => ['0:C3','0:C5'], FILTER => "SingleColumnValueFilter 
('0','C5',=,'binary:whatever')"}
{noformat}  
How the scan performs:
First, we iterate A for rows 0 and 1 without any problems. 
Next, we start to iterate A for row 10: we read the first cell and set the hfs 
scanner to A:
10:0/C1/0/Put/x, but find that we have a newer version of the cell in B: 
10:0/C1/1/Put/x, 
so we make B our current store scanner. Since we are looking for the particular 
columns C3 and C5, we perform the optimization 
StoreScanner.seekOrSkipToNextColumn, which 
would run reseek for all store scanners.
For store A the following magic would happen in requestSeek:
  1. The bloom filter check passes: GeneralBloomFilter would set haveToSeek to 
false because row 10 doesn't have the C3 qualifier in store A.
  2. Since we don't have to seek, we just create a fake key 
10:0/C3/OLDEST_TIMESTAMP/Maximum, an optimization that is quite important for 
us and is commented with:
{noformat}
// Multi-column Bloom filter optimization.
// Create a fake key/value, so that this scanner only bubbles up to the top
// of the KeyValueHeap in StoreScanner after we scanned this row/column in
// all other store files. The query matcher will then just skip this fake
// key/value and the store scanner will progress to the next column. This
// is obviously not a "real real" seek, but unlike the fake KV earlier in
// this method, we want this to be propagated to ScanQueryMatcher.
{noformat}
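The fake-key trick above can be sketched with a toy model — Cell, the comparator, and the OLDEST_TIMESTAMP sentinel are all simplified stand-ins, not the real HBase KeyValue machinery:

```java
import java.util.Comparator;

// Toy model: cells sort by (row asc, qualifier asc, timestamp DESC).
// A fake cell for (row, qualifier) with ts = OLDEST_TIMESTAMP sorts AFTER
// every real cell of that column, so this scanner only bubbles up in the
// KeyValueHeap once the other stores have scanned past that column.
public class FakeKeySketch {
    static final long OLDEST_TIMESTAMP = 0L; // hypothetical sentinel

    record Cell(String row, String qualifier, long ts) {}

    static final Comparator<Cell> CMP = Comparator
        .comparing(Cell::row)
        .thenComparing(Cell::qualifier)
        .thenComparing(Comparator.comparingLong(Cell::ts).reversed());

    // When the ROWCOL Bloom filter says (row, qualifier) is absent in this
    // store, skip the real disk seek and position on a fake cell instead.
    static Cell seekOrFake(boolean bloomSaysPresent, Cell target) {
        if (!bloomSaysPresent) {
            return new Cell(target.row(), target.qualifier(), OLDEST_TIMESTAMP);
        }
        return target; // a real seek would happen here
    }

    public static void main(String[] args) {
        Cell real = new Cell("10", "C3", 5L);
        Cell fake = seekOrFake(false, new Cell("10", "C3", Long.MAX_VALUE));
        // The fake cell sorts after any real cell of the same column...
        if (CMP.compare(fake, real) <= 0) throw new AssertionError();
        // ...but before the next column, so the matcher just skips it.
        if (CMP.compare(fake, new Cell("10", "C5", 5L)) >= 0) throw new AssertionError();
    }
}
```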

For store B we would set it to the fake 10:0/C3/createFirstOnRowColTS()/Maximum 
to skip C3 entirely. 
After that we start searching for qualifier C5 using seekOrSkipToNextColumn, 
which first runs trySkipToNextColumn:
{noformat}
  protected boolean trySkipToNextColumn(Cell cell) throws IOException {
    Cell nextCell = null;
    do {
      Cell nextIndexedKey = getNextIndexedKey();
      if (nextIndexedKey != null && nextIndexedKey != KeyValueScanner.NO_NEXT_INDEXED_KEY
          && matcher.compareKeyForNextColumn(nextIndexedKey, cell) >= 0) {
        this.heap.next();
        ++kvsScanned;
      } else {
        return false;
      }
    } while ((nextCell = this.heap.peek()) != null
        && CellUtil.matchingRowColumn(cell, nextCell));
    return true;
  }
{noformat}
If the store has several blocks, then nextIndexedKey would not be null and 
compareKeyForNextColumn wouldn't be negative, 
so we keep searching forward until we reach the next index key or the end of 
the row. But in this.heap.next(), the scanner for A bubbles up

[jira] [Commented] (HBASE-19841) Tests against hadoop3 fail with StreamLacksCapabilityException

2018-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340247#comment-16340247
 ] 

Hadoop QA commented on HBASE-19841:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 25 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
20s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
17s{color} | {color:red} hbase-server: The patch generated 2 new + 782 
unchanged - 0 fixed = 784 total (was 782) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
41s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
18m 10s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
14s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}103m 
22s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19841 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907763/19841.06.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  xml  |
| uname | Linux 5e759db9c627 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / aeffca497b |
| maven | version: Apache Maven 3.5.2 

[jira] [Commented] (HBASE-19803) False positive for the HBASE-Find-Flaky-Tests job

2018-01-25 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340184#comment-16340184
 ] 

Appy commented on HBASE-19803:
--

Pushing and starting another nightly run.

Part of me says this might not be useful, since Java always tries to core dump 
on a VM crash (at worst, in the /tmp dir), and if any core dumps were 
happening, the surefire plugin should have caught them anyway (irrespective 
of location) and generated a *.dumpstream file in surefire-reports.

I see 5 .dump files (not .dumpstream) in test_logs.zip of 
[https://builds.apache.org/job/HBase%20Nightly/job/branch-2/197|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/197].
 They are:
-rw-r--r-- 1 appy staff 226B Jan 23 14:43 2018-01-23T20-53-29_364-jvmRun1.dump
-rw-r--r-- 1 appy staff 226B Jan 23 14:43 2018-01-23T20-53-29_364-jvmRun2.dump
-rw-r--r-- 1 appy staff 331B Jan 23 14:43 2018-01-23T20-53-29_364-jvmRun3.dump
-rw-r--r-- 1 appy staff 226B Jan 23 14:43 2018-01-23T20-53-29_364-jvmRun4.dump
-rw-r--r-- 1 appy staff 226B Jan 23 14:43 2018-01-23T20-53-29_364-jvmRun5.dump

What I don't understand is why the surefire plugin tries to stop all 5 JVMs at 
exactly 21:55:35 and 22:43:34.
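For anyone checking a local build for the same artifacts, a small sketch (assumed layout — every module keeps them under target/surefire-reports) that walks the tree and lists surefire crash files:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

// List surefire crash artifacts under all modules' surefire-reports dirs.
// *.dumpstream usually means the forked JVM crashed or was killed;
// *.dump records why surefire stopped a forked JVM.
public class FindDumps {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        try (Stream<Path> paths = Files.walk(root)) {
            paths.filter(p -> p.toString().contains("surefire-reports"))
                 .filter(p -> p.toString().endsWith(".dump")
                           || p.toString().endsWith(".dumpstream"))
                 .forEach(System.out::println);
        }
    }
}
```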

> False positive for the HBASE-Find-Flaky-Tests job
> -
>
> Key: HBASE-19803
> URL: https://issues.apache.org/jira/browse/HBASE-19803
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Priority: Major
> Attachments: 2018-01-24T17-45-37_000-jvmRun1.dumpstream, 
> HBASE-19803.master.001.patch
>
>
> It reports two hangs for TestAsyncTableGetMultiThreaded, but I checked the 
> surefire output
> https://builds.apache.org/job/HBASE-Flaky-Tests/24830/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreaded-output.txt
> This one was likely to be killed in the middle of the run within 20 seconds.
> https://builds.apache.org/job/HBASE-Flaky-Tests/24852/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreaded-output.txt
> This one was also killed within about 1 minute.
> The test is declared as LargeTests, so the time limit should be 10 minutes. 
> It seems that the JVM may crash during the mvn test run; we then kill all 
> the running tests and may mark some of them as hung, which leads to the 
> false positive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-19803) False positive for the HBASE-Find-Flaky-Tests job

2018-01-25 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340184#comment-16340184
 ] 

Appy edited comment on HBASE-19803 at 1/25/18 10:13 PM:


Pushing and starting another nightly run.

Part of me says this might not be useful, since Java always tries to core dump 
on a VM crash (at worst, in the /tmp dir), and if any core dumps were 
happening, the surefire plugin should have caught them anyway (irrespective 
of location) and generated a *.dumpstream file in surefire-reports.

I see 5 .dump files (not .dumpstream) in test_logs.zip of 
[https://builds.apache.org/job/HBase%20Nightly/job/branch-2/197|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/197].
 They are:
-rw-r--r-- 1 appy staff 226B Jan 23 14:43 2018-01-23T20-53-29_364-jvmRun1.dump
-rw-r--r-- 1 appy staff 226B Jan 23 14:43 2018-01-23T20-53-29_364-jvmRun2.dump
-rw-r--r-- 1 appy staff 331B Jan 23 14:43 2018-01-23T20-53-29_364-jvmRun3.dump
-rw-r--r-- 1 appy staff 226B Jan 23 14:43 2018-01-23T20-53-29_364-jvmRun4.dump
-rw-r--r-- 1 appy staff 226B Jan 23 14:43 2018-01-23T20-53-29_364-jvmRun5.dump

What I don't understand is why the surefire plugin tries to stop all 5 JVMs at 
exactly 21:55:35 and 22:43:34.


was (Author: appy):
Pushing and starting another nightly run.

I have a part which tells that this might not be useful since java always tries 
to core dump on vm crash (at worst, in /tmp dir), and if there were any core 
dumps happening, surefire plugin should have caught them anyways (irrespective 
of location) and generated a *.dumpstream file in surefire-reports.

I see 5 .dump files (not .dumpstream) in test_logs.zip of 
[https://builds.apache.org/job/HBase%20Nightly/job/branch-2/197|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/197.].
 That 
-rw-r--r-- 1 appy staff 226B Jan 23 14:43 2018-01-23T20-53-29_364-jvmRun1.dump
-rw-r--r-- 1 appy staff 226B Jan 23 14:43 2018-01-23T20-53-29_364-jvmRun2.dump
-rw-r--r-- 1 appy staff 331B Jan 23 14:43 2018-01-23T20-53-29_364-jvmRun3.dump
-rw-r--r-- 1 appy staff 226B Jan 23 14:43 2018-01-23T20-53-29_364-jvmRun4.dump
-rw-r--r-- 1 appy staff 226B Jan 23 14:43 2018-01-23T20-53-29_364-jvmRun5.dump

What i don't understand is, why does surefire plugin try to stop all 5 jvms at 
exactly 21:55:35 and 22:43:34.

> False positive for the HBASE-Find-Flaky-Tests job
> -
>
> Key: HBASE-19803
> URL: https://issues.apache.org/jira/browse/HBASE-19803
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Priority: Major
> Attachments: 2018-01-24T17-45-37_000-jvmRun1.dumpstream, 
> HBASE-19803.master.001.patch
>
>
> It reports two hangs for TestAsyncTableGetMultiThreaded, but I checked the 
> surefire output
> https://builds.apache.org/job/HBASE-Flaky-Tests/24830/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreaded-output.txt
> This one was likely to be killed in the middle of the run within 20 seconds.
> https://builds.apache.org/job/HBASE-Flaky-Tests/24852/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestAsyncTableGetMultiThreaded-output.txt
> This one was also killed within about 1 minute.
> The test is declared as LargeTests, so the time limit should be 10 minutes. 
> It seems that the JVM may crash during the mvn test run; we then kill all 
> the running tests and may mark some of them as hung, which leads to the 
> false positive.





[jira] [Commented] (HBASE-19862) Fix TestTokenAuthentication - fake RegionCoprocessorEnvironment is not of type HasRegionServerServices

2018-01-25 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16340177#comment-16340177
 ] 

Appy commented on HBASE-19862:
--

Ping [~stack].

> Fix TestTokenAuthentication - fake RegionCoprocessorEnvironment is not of 
> type HasRegionServerServices
> --
>
> Key: HBASE-19862
> URL: https://issues.apache.org/jira/browse/HBASE-19862
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>Priority: Major
> Attachments: HBASE-19862.branch-2.001.patch
>
>
> We have the temporary HasRegionServerServices (added in HBASE-19007) and the 
> concept of CoreCoprocessors, which require that whichever 
> *CoprocessorEnvironment they get also implements HasRegionServerServices.
> This test builds a mock RegionCpEnv for TokenProvider (a RegionCoprocessor), 
> but it falls short of what's expected and results in the following exceptions 
> in the test logs:
> {noformat}
> 2018-01-25 14:38:54,855 ERROR [TokenServer:d9a9782cd075,39492,1516891133911] 
> helpers.MarkerIgnoringBase(159): Aborting on: 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer$2 
> cannot be cast to org.apache.hadoop.hbase.coprocessor.HasRegionServerServices
> java.lang.ClassCastException: 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer$2 
> cannot be cast to org.apache.hadoop.hbase.coprocessor.HasRegionServerServices
>   at 
> org.apache.hadoop.hbase.security.token.TokenProvider.start(TokenProvider.java:70)
>   at 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer.initialize(TestTokenAuthentication.java:275)
>   at 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer.run(TestTokenAuthentication.java:347)
> {noformat}
> The patch adds the missing interface to the mock. It also uses Mockito to 
> mock the interfaces rather than the crude way.





[jira] [Updated] (HBASE-19862) Fix TestTokenAuthentication - fake RegionCoprocessorEnvironment is not of type HasRegionServerServices

2018-01-25 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19862:
-
Status: Patch Available  (was: Open)

> Fix TestTokenAuthentication - fake RegionCoprocessorEnvironment is not of 
> type HasRegionServerServices
> --
>
> Key: HBASE-19862
> URL: https://issues.apache.org/jira/browse/HBASE-19862
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>Priority: Major
> Attachments: HBASE-19862.branch-2.001.patch
>
>
> We have the temporary HasRegionServerServices (added in HBASE-19007) and the 
> concept of CoreCoprocessors, which require that whichever 
> *CoprocessorEnvironment they get also implements HasRegionServerServices.
> This test builds a mock RegionCpEnv for TokenProvider (a RegionCoprocessor), 
> but it falls short of what's expected and results in the following exceptions 
> in the test logs:
> {noformat}
> 2018-01-25 14:38:54,855 ERROR [TokenServer:d9a9782cd075,39492,1516891133911] 
> helpers.MarkerIgnoringBase(159): Aborting on: 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer$2 
> cannot be cast to org.apache.hadoop.hbase.coprocessor.HasRegionServerServices
> java.lang.ClassCastException: 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer$2 
> cannot be cast to org.apache.hadoop.hbase.coprocessor.HasRegionServerServices
>   at 
> org.apache.hadoop.hbase.security.token.TokenProvider.start(TokenProvider.java:70)
>   at 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer.initialize(TestTokenAuthentication.java:275)
>   at 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer.run(TestTokenAuthentication.java:347)
> {noformat}
> The patch adds the missing interface to the mock. It also uses Mockito to 
> mock the interfaces rather than the crude way.





[jira] [Updated] (HBASE-19862) Fix TestTokenAuthentication - fake RegionCoprocessorEnvironment is not of type HasRegionServerServices

2018-01-25 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19862:
-
Attachment: HBASE-19862.branch-2.001.patch

> Fix TestTokenAuthentication - fake RegionCoprocessorEnvironment is not of 
> type HasRegionServerServices
> --
>
> Key: HBASE-19862
> URL: https://issues.apache.org/jira/browse/HBASE-19862
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
>Priority: Major
> Attachments: HBASE-19862.branch-2.001.patch
>
>
> We have the temporary HasRegionServerServices (added in HBASE-19007) and the 
> concept of CoreCoprocessors, which require that whichever 
> *CoprocessorEnvironment they get also implements HasRegionServerServices.
> This test builds a mock RegionCpEnv for TokenProvider (a RegionCoprocessor), 
> but it falls short of what's expected and results in the following exceptions 
> in the test logs:
> {noformat}
> 2018-01-25 14:38:54,855 ERROR [TokenServer:d9a9782cd075,39492,1516891133911] 
> helpers.MarkerIgnoringBase(159): Aborting on: 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer$2 
> cannot be cast to org.apache.hadoop.hbase.coprocessor.HasRegionServerServices
> java.lang.ClassCastException: 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer$2 
> cannot be cast to org.apache.hadoop.hbase.coprocessor.HasRegionServerServices
>   at 
> org.apache.hadoop.hbase.security.token.TokenProvider.start(TokenProvider.java:70)
>   at 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer.initialize(TestTokenAuthentication.java:275)
>   at 
> org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer.run(TestTokenAuthentication.java:347)
> {noformat}
> The patch adds the missing interface to the mock. It also uses Mockito to 
> mock the interfaces rather than the crude way.





[jira] [Created] (HBASE-19862) Fix TestTokenAuthentication - fake RegionCoprocessorEnvironment is not of type HasRegionServerServices

2018-01-25 Thread Appy (JIRA)
Appy created HBASE-19862:


 Summary: Fix TestTokenAuthentication - fake 
RegionCoprocessorEnvironment is not of type HasRegionServerServices
 Key: HBASE-19862
 URL: https://issues.apache.org/jira/browse/HBASE-19862
 Project: HBase
  Issue Type: Bug
Reporter: Appy
Assignee: Appy


We have the temporary HasRegionServerServices (added in HBASE-19007) and the 
concept of CoreCoprocessors, which require that whichever *CoprocessorEnvironment 
they get also implements HasRegionServerServices.
This test builds a mock RegionCpEnv for TokenProvider (a RegionCoprocessor), but 
it falls short of what's expected and results in the following exceptions in the 
test logs:
{noformat}
2018-01-25 14:38:54,855 ERROR [TokenServer:d9a9782cd075,39492,1516891133911] 
helpers.MarkerIgnoringBase(159): Aborting on: 
org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer$2 
cannot be cast to org.apache.hadoop.hbase.coprocessor.HasRegionServerServices
java.lang.ClassCastException: 
org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer$2 
cannot be cast to org.apache.hadoop.hbase.coprocessor.HasRegionServerServices
at 
org.apache.hadoop.hbase.security.token.TokenProvider.start(TokenProvider.java:70)
at 
org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer.initialize(TestTokenAuthentication.java:275)
at 
org.apache.hadoop.hbase.security.token.TestTokenAuthentication$TokenServer.run(TestTokenAuthentication.java:347)
{noformat}

The patch adds the missing interface to the mock. It also uses Mockito to mock 
the interfaces rather than the crude way.
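With Mockito the standard mechanism for this is extra interfaces, e.g. mock(RegionCoprocessorEnvironment.class, withSettings().extraInterfaces(HasRegionServerServices.class)). A stdlib-only sketch of the same idea using a dynamic proxy — the two Fake* interfaces are hypothetical stand-ins, not the real HBase types:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hypothetical stand-ins for RegionCoprocessorEnvironment and
// HasRegionServerServices.
interface FakeRegionCoprocessorEnvironment { String getHBaseVersion(); }
interface FakeHasRegionServerServices { Object getRegionServerServices(); }

public class MockEnvSketch {
    // Build ONE object implementing BOTH interfaces, so a cast like
    // (FakeHasRegionServerServices) env succeeds. Mockito's
    // withSettings().extraInterfaces(...) achieves the same effect.
    static Object newEnv() {
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getName().equals("getHBaseVersion")) return "x.y.z"; // stub value
            return null; // stub everything else
        };
        return Proxy.newProxyInstance(
            MockEnvSketch.class.getClassLoader(),
            new Class<?>[] { FakeRegionCoprocessorEnvironment.class,
                             FakeHasRegionServerServices.class },
            handler);
    }

    public static void main(String[] args) {
        Object env = newEnv();
        // The cast that threw ClassCastException in the test now succeeds.
        if (!(env instanceof FakeHasRegionServerServices)) throw new AssertionError();
        if (!(env instanceof FakeRegionCoprocessorEnvironment)) throw new AssertionError();
    }
}
```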





[jira] [Updated] (HBASE-19841) Tests against hadoop3 fail with StreamLacksCapabilityException

2018-01-25 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-19841:
--
Attachment: 19841.06.patch

> Tests against hadoop3 fail with StreamLacksCapabilityException
> --
>
> Key: HBASE-19841
> URL: https://issues.apache.org/jira/browse/HBASE-19841
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 19841.06.patch, 19841.v0.txt, 19841.v1.txt, 
> HBASE-19841.v2.patch, HBASE-19841.v3.patch, HBASE-19841.v4.patch, 
> HBASE-19841.v5.patch
>
>
> The following can be observed running against hadoop3:
> {code}
> java.io.IOException: cannot get log writer
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> Caused by: 
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: 
> hflush and hsync
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> {code}
> This was due to hbase-server/src/test/resources/hbase-site.xml not being 
> picked up by the Configuration object. Among the configs in this file, the 
> value for "hbase.unsafe.stream.capability.enforce" relaxes the check for the 
> presence of hflush and hsync. Without this config entry, 
> StreamLacksCapabilityException is thrown.
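For context, the test-resource entry in question looks roughly like this (an assumed excerpt — the exact file contents aren't reproduced in this thread):

```xml
<!-- Hypothetical excerpt of hbase-server/src/test/resources/hbase-site.xml.
     Setting the property to false relaxes the hflush/hsync capability check,
     so LocalFileSystem-backed test streams don't trigger
     StreamLacksCapabilityException. -->
<configuration>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
</configuration>
```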





[jira] [Updated] (HBASE-19841) Tests against hadoop3 fail with StreamLacksCapabilityException

2018-01-25 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-19841:
--
Assignee: Mike Drob  (was: Ted Yu)
  Status: Patch Available  (was: Open)

v6: hopefully this patch is named correctly to work around Yetus' limitations.

> Tests against hadoop3 fail with StreamLacksCapabilityException
> --
>
> Key: HBASE-19841
> URL: https://issues.apache.org/jira/browse/HBASE-19841
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Mike Drob
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 19841.06.patch, 19841.v0.txt, 19841.v1.txt, 
> HBASE-19841.v2.patch, HBASE-19841.v3.patch, HBASE-19841.v4.patch, 
> HBASE-19841.v5.patch
>
>
> The following can be observed running against hadoop3:
> {code}
> java.io.IOException: cannot get log writer
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> Caused by: 
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: 
> hflush and hsync
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> {code}
> This was due to hbase-server/src/test/resources/hbase-site.xml not being 
> picked up by the Configuration object. Among the configs in this file, the 
> value for "hbase.unsafe.stream.capability.enforce" relaxes the check for the 
> presence of hflush and hsync. Without this config entry, 
> StreamLacksCapabilityException is thrown.





[jira] [Updated] (HBASE-19841) Tests against hadoop3 fail with StreamLacksCapabilityException

2018-01-25 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-19841:
--
Attachment: 19841.v06.patch

> Tests against hadoop3 fail with StreamLacksCapabilityException
> --
>
> Key: HBASE-19841
> URL: https://issues.apache.org/jira/browse/HBASE-19841
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 19841.v0.txt, 19841.v1.txt, HBASE-19841.v2.patch, 
> HBASE-19841.v3.patch, HBASE-19841.v4.patch, HBASE-19841.v5.patch
>
>
> The following can be observed running against hadoop3:
> {code}
> java.io.IOException: cannot get log writer
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> Caused by: 
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: 
> hflush and hsync
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> {code}
> This was due to hbase-server/src/test/resources/hbase-site.xml not being 
> picked up by the Configuration object. Among the configs in this file, the 
> value for "hbase.unsafe.stream.capability.enforce" relaxes the check for the 
> presence of hflush and hsync. Without this config entry, 
> StreamLacksCapabilityException is thrown.





[jira] [Updated] (HBASE-19841) Tests against hadoop3 fail with StreamLacksCapabilityException

2018-01-25 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated HBASE-19841:
--
Attachment: (was: 19841.v06.patch)

> Tests against hadoop3 fail with StreamLacksCapabilityException
> --
>
> Key: HBASE-19841
> URL: https://issues.apache.org/jira/browse/HBASE-19841
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 19841.v0.txt, 19841.v1.txt, HBASE-19841.v2.patch, 
> HBASE-19841.v3.patch, HBASE-19841.v4.patch, HBASE-19841.v5.patch
>
>
> The following can be observed running against hadoop3:
> {code}
> java.io.IOException: cannot get log writer
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> Caused by: 
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: 
> hflush and hsync
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> {code}
> This was due to hbase-server/src/test/resources/hbase-site.xml not being 
> picked up by the Configuration object. Among the configs in this file, the 
> value for "hbase.unsafe.stream.capability.enforce" relaxes the check for the 
> presence of hflush and hsync. Without this config entry, 
> StreamLacksCapabilityException is thrown.





[jira] [Commented] (HBASE-19846) Fix findbugs and error-prone warnings in hbase-rest (branch-2)

2018-01-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339822#comment-16339822
 ] 

Hudson commented on HBASE-19846:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #4467 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/4467/])
HBASE-19846 Fix findbugs and error-prone warnings in hbase-rest (tedyu: rev 
aeffca497bf36ea12f89a5f92d2f918b010741fc)
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableRegionModel.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableSchemaModel.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestVersionModel.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestTableResource.java
* (edit) 
hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/MultiRowResource.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestScannerResource.java
* (edit) hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RowSpec.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/RowResourceBase.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestSchemaResource.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestCellModel.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestGetAndPutResource.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestMultiRowResource.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestNamespacesInstanceModel.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestRowModel.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestStorageClusterStatusModel.java
* (edit) 
hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/model/RowModel.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestModelBase.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestGzipFilter.java
* (edit) 
hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/provider/producer/PlainTextMessageBodyProducer.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableListModel.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestStorageClusterVersionModel.java
* (edit) 
hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/ScannerResultGenerator.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestNamespacesModel.java
* (edit) 
hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/provider/producer/ProtobufMessageBodyProducer.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestScannersWithFilters.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestStatusResource.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestDeleteRow.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestTableInfoModel.java
* (edit) 
hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/RowResultGenerator.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/client/TestXmlParsing.java
* (edit) 
hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/provider/consumer/ProtobufMessageBodyConsumer.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestColumnSchemaModel.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestScannersWithLabels.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestCellSetModel.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestScannerModel.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestTableScan.java
* (edit) 
hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/TestVersionResource.java


> Fix findbugs and error-prone warnings in hbase-rest (branch-2)
> --
>
> Key: HBASE-19846
> URL: https://issues.apache.org/jira/browse/HBASE-19846
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-beta-1
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19846.master.001.patch
>
>






[jira] [Commented] (HBASE-19841) Tests against hadoop3 fail with StreamLacksCapabilityException

2018-01-25 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339714#comment-16339714
 ] 

Josh Elser commented on HBASE-19841:


Ahhh, of course. Thanks for that explanation.

+1 on Precommit

> Tests against hadoop3 fail with StreamLacksCapabilityException
> --
>
> Key: HBASE-19841
> URL: https://issues.apache.org/jira/browse/HBASE-19841
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 19841.v0.txt, 19841.v1.txt, HBASE-19841.v2.patch, 
> HBASE-19841.v3.patch, HBASE-19841.v4.patch, HBASE-19841.v5.patch
>
>
> The following can be observed running against hadoop3:
> {code}
> java.io.IOException: cannot get log writer
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> Caused by: 
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: 
> hflush and hsync
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> {code}
> This was due to hbase-server/src/test/resources/hbase-site.xml not being 
> picked up by the Configuration object. Among the configs in this file, the 
> value for "hbase.unsafe.stream.capability.enforce" relaxes the check for the 
> presence of hflush and hsync. Without this config entry, 
> StreamLacksCapabilityException is thrown.





[jira] [Commented] (HBASE-19859) Update download page header for 1.1 EOL

2018-01-25 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339640#comment-16339640
 ] 

Mike Drob commented on HBASE-19859:
---

Did we officially move the stable pointer? I thought 1.4.x was going to get 
additional burn-in time, but it's very possible I missed the email thread.

cc: [~apurtell]

> Update download page header for 1.1 EOL
> ---
>
> Key: HBASE-19859
> URL: https://issues.apache.org/jira/browse/HBASE-19859
> Project: HBase
>  Issue Type: Task
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Attachments: HBASE-19583.patch
>
>
> See example mirror: http://mirrors.ocf.berkeley.edu/apache/hbase/
> They still claim that 1.1 is under active development.





[jira] [Commented] (HBASE-19841) Tests against hadoop3 fail with StreamLacksCapabilityException

2018-01-25 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339637#comment-16339637
 ] 

Mike Drob commented on HBASE-19841:
---

There's a file system cache, so when you call getFileSystem, you get the copy 
of the file system that was created before we modified the configuration to add 
our property.
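The caching behavior described above can be sketched with a toy model (an intentionally simplified stand-in, not Hadoop's actual FileSystem.CACHE code): instances are keyed by URI, not by the Configuration, so a later lookup with a modified Configuration still returns the instance built from the old one.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for a cached file system; holds a snapshot of the conf it was built with.
class ToyFileSystem {
  final Map<String, String> conf;
  ToyFileSystem(Map<String, String> conf) { this.conf = new HashMap<>(conf); }
}

class ToyFsCache {
  // Mirrors the idea of FileSystem.CACHE: keyed by URI (scheme/authority), not by conf.
  private static final Map<String, ToyFileSystem> CACHE = new HashMap<>();

  static ToyFileSystem get(String uri, Map<String, String> conf) {
    return CACHE.computeIfAbsent(uri, u -> new ToyFileSystem(conf));
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    ToyFileSystem first = get("file:///", conf);

    // A property added *after* the first lookup never reaches the cached instance.
    conf.put("hbase.unsafe.stream.capability.enforce", "false");
    ToyFileSystem second = get("file:///", conf);

    System.out.println(first == second);                 // true: same cached object
    System.out.println(second.conf.containsKey(
        "hbase.unsafe.stream.capability.enforce"));      // false: conf snapshot predates it
  }
}
```

This is why the patch on this issue copies the property onto the returned file system's configuration explicitly.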

> Tests against hadoop3 fail with StreamLacksCapabilityException
> --
>
> Key: HBASE-19841
> URL: https://issues.apache.org/jira/browse/HBASE-19841
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 19841.v0.txt, 19841.v1.txt, HBASE-19841.v2.patch, 
> HBASE-19841.v3.patch, HBASE-19841.v4.patch, HBASE-19841.v5.patch
>
>
> The following can be observed running against hadoop3:
> {code}
> java.io.IOException: cannot get log writer
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> Caused by: 
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: 
> hflush and hsync
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> {code}
> This was due to hbase-server/src/test/resources/hbase-site.xml not being 
> picked up by the Configuration object. Among the configs in this file, the 
> value for "hbase.unsafe.stream.capability.enforce" relaxes the check for the 
> presence of hflush and hsync. Without this config entry, 
> StreamLacksCapabilityException is thrown.





[jira] [Comment Edited] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2018-01-25 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339615#comment-16339615
 ] 

Vladimir Rodionov edited comment on HBASE-17852 at 1/25/18 6:43 PM:


[~appy] we already have a fully functional module, but you suggest rewriting 
20%-40% of the code. That is why my response is so strong. As for procv2, I have 
heard a lot from other developers who worked on procv2-related bugs. 

Backup is not like table create, truncate, split, etc. - it is in its own league. 

{quote}

What would make it mature & robust enough for B&R in your opinion?

{quote}

2-3 years of bug fixing :)

As for concurrent sessions, as I already said, it is doable, but it will require 
a lot of effort, especially in testing. Can you tell me why you think my 
suggested approach is not good enough? In a case where only an ADMIN can run 
operations, what is the use case where truly concurrent sessions are a must?

 


was (Author: vrodionov):
[~appy] we already have a fully functional module, but you suggest rewriting 
20%-40% of the code. That is why my response is so strong. As for procv2, I have 
heard a lot from other developers who worked on procv2-related bugs. 

Backup is not like table create, truncate, split, etc. - it is in its own league. 

As for concurrent sessions, as I already said, it is doable, but it will require 
a lot of effort, especially in testing. Can you tell me why you think my 
suggested approach is not good enough? In a case where only an ADMIN can run 
operations, what is the use case where truly concurrent sessions are a must?

 

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-17852-v10.patch, screenshot-1.png
>
>
> Design approach: rollback-via-snapshot, implemented in this ticket:
> # Before a backup create/delete/merge starts, we take a snapshot of the backup 
> meta-table (backup system table). This procedure is lightweight because the 
> meta table is small and usually fits in a single region.
> # When an operation fails on the server side, we handle the failure by 
> cleaning up partial data in the backup destination, followed by restoring the 
> backup meta-table from the snapshot. 
> # When an operation fails on the client side (abnormal termination, for 
> example), the next time the user tries create/merge/delete, he (she) will see 
> an error message that the system is in an inconsistent state and repair is 
> required; he (she) will need to run the backup repair tool.
> # To avoid multiple writers to the backup system table (backup client and 
> BackupObserver's), we introduce a small table ONLY to keep the listing of bulk 
> loaded files. All backup observers will work only with this new table. The 
> reason: in case of a failure during backup create/delete/merge/restore, when 
> the system performs automatic rollback, some data written by backup observers 
> during the failed operation may be lost. This is what we try to avoid.
> # The second table keeps only bulk load related references. We do not care 
> about the consistency of this table, because bulk load is an idempotent 
> operation and can be repeated after failure. Partially written data in the 
> second table does not affect the BackupHFileCleaner plugin, because this data 
> (the list of bulk loaded files) corresponds to files which have not yet been 
> loaded successfully and, hence, are not visible to the system. 
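The rollback-via-snapshot flow in steps 1-2 above can be sketched as follows; the MetaTable interface and its method names are hypothetical stand-ins for illustration, not the actual HBase backup API.

```java
// Hypothetical handle to the backup meta-table; names are illustrative only.
interface MetaTable {
  void snapshot(String name) throws Exception;
  void restoreSnapshot(String name) throws Exception;
  void deleteSnapshot(String name) throws Exception;
}

class BackupOperation {
  // Wraps a backup create/delete/merge so a server-side failure rolls the
  // meta-table back to its pre-operation state.
  static void runWithRollback(MetaTable meta, Runnable operation) throws Exception {
    String snapshot = "backup-meta-" + System.currentTimeMillis();
    meta.snapshot(snapshot);           // cheap: the meta table is small
    try {
      operation.run();                 // the actual backup operation
      meta.deleteSnapshot(snapshot);   // success: snapshot no longer needed
    } catch (RuntimeException failure) {
      // Cleanup of partial data in the backup destination would happen here,
      // followed by restoring the meta-table from the snapshot.
      meta.restoreSnapshot(snapshot);
      throw failure;
    }
  }
}
```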





[jira] [Comment Edited] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2018-01-25 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339615#comment-16339615
 ] 

Vladimir Rodionov edited comment on HBASE-17852 at 1/25/18 6:42 PM:


[~appy] we already have a fully functional module, but you suggest rewriting 
20%-40% of the code. That is why my response is so strong. As for procv2, I have 
heard a lot from other developers who worked on procv2-related bugs. 

Backup is not like table create, truncate, split, etc. - it is in its own league. 

As for concurrent sessions, as I already said, it is doable, but it will require 
a lot of effort, especially in testing. Can you tell me why you think my 
suggested approach is not good enough? In a case where only an ADMIN can run 
operations, what is the use case where truly concurrent sessions are a must?

 


was (Author: vrodionov):
[~appy] we already have a fully functional module, but you suggest rewriting 
20%-40% of the code. That is why my response is so strong. As for procv2, I have 
heard a lot from other developers who worked on procv2-related bugs. 

Backup is not like table create, truncate, split, etc. - it is in its own league. 

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-17852-v10.patch, screenshot-1.png
>
>
> Design approach: rollback-via-snapshot, implemented in this ticket:
> # Before a backup create/delete/merge starts, we take a snapshot of the backup 
> meta-table (backup system table). This procedure is lightweight because the 
> meta table is small and usually fits in a single region.
> # When an operation fails on the server side, we handle the failure by 
> cleaning up partial data in the backup destination, followed by restoring the 
> backup meta-table from the snapshot. 
> # When an operation fails on the client side (abnormal termination, for 
> example), the next time the user tries create/merge/delete, he (she) will see 
> an error message that the system is in an inconsistent state and repair is 
> required; he (she) will need to run the backup repair tool.
> # To avoid multiple writers to the backup system table (backup client and 
> BackupObserver's), we introduce a small table ONLY to keep the listing of bulk 
> loaded files. All backup observers will work only with this new table. The 
> reason: in case of a failure during backup create/delete/merge/restore, when 
> the system performs automatic rollback, some data written by backup observers 
> during the failed operation may be lost. This is what we try to avoid.
> # The second table keeps only bulk load related references. We do not care 
> about the consistency of this table, because bulk load is an idempotent 
> operation and can be repeated after failure. Partially written data in the 
> second table does not affect the BackupHFileCleaner plugin, because this data 
> (the list of bulk loaded files) corresponds to files which have not yet been 
> loaded successfully and, hence, are not visible to the system. 





[jira] [Commented] (HBASE-15666) shaded dependencies for hbase-testing-util

2018-01-25 Thread DM (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339623#comment-16339623
 ] 

DM commented on HBASE-15666:


Thanks [~mdrob] for looking into this. Let me know if you need any help from my 
side.

> shaded dependencies for hbase-testing-util
> --
>
> Key: HBASE-15666
> URL: https://issues.apache.org/jira/browse/HBASE-15666
> Project: HBase
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 1.1.0, 1.2.0
>Reporter: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.5.0
>
>
> Folks that make use of our shaded client but then want to test things using 
> the hbase-testing-util end up getting all of our dependencies again in the 
> test scope.





[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2018-01-25 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339615#comment-16339615
 ] 

Vladimir Rodionov commented on HBASE-17852:
---

[~appy] we already have a fully functional module, but you suggest rewriting 
20%-40% of the code. That is why my response is so strong. As for procv2, I have 
heard a lot from other developers who worked on procv2-related bugs. 

Backup is not like table create, truncate, split, etc. - it is in its own league. 

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-17852-v10.patch, screenshot-1.png
>
>
> Design approach: rollback-via-snapshot, implemented in this ticket:
> # Before a backup create/delete/merge starts, we take a snapshot of the backup 
> meta-table (backup system table). This procedure is lightweight because the 
> meta table is small and usually fits in a single region.
> # When an operation fails on the server side, we handle the failure by 
> cleaning up partial data in the backup destination, followed by restoring the 
> backup meta-table from the snapshot. 
> # When an operation fails on the client side (abnormal termination, for 
> example), the next time the user tries create/merge/delete, he (she) will see 
> an error message that the system is in an inconsistent state and repair is 
> required; he (she) will need to run the backup repair tool.
> # To avoid multiple writers to the backup system table (backup client and 
> BackupObserver's), we introduce a small table ONLY to keep the listing of bulk 
> loaded files. All backup observers will work only with this new table. The 
> reason: in case of a failure during backup create/delete/merge/restore, when 
> the system performs automatic rollback, some data written by backup observers 
> during the failed operation may be lost. This is what we try to avoid.
> # The second table keeps only bulk load related references. We do not care 
> about the consistency of this table, because bulk load is an idempotent 
> operation and can be repeated after failure. Partially written data in the 
> second table does not affect the BackupHFileCleaner plugin, because this data 
> (the list of bulk loaded files) corresponds to files which have not yet been 
> loaded successfully and, hence, are not visible to the system. 





[jira] [Assigned] (HBASE-19848) Zookeeper thread leaks in hbase-spark bulkLoad method

2018-01-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-19848:
--

Assignee: Key Hutu

> Zookeeper thread leaks in hbase-spark bulkLoad method
> -
>
> Key: HBASE-19848
> URL: https://issues.apache.org/jira/browse/HBASE-19848
> Project: HBase
>  Issue Type: Bug
>  Components: spark, Zookeeper
>Affects Versions: 1.2.0
> Environment: hbase-spark-1.2.0-cdh5.12.1 version
> spark 1.6
>Reporter: Key Hutu
>Assignee: Key Hutu
>Priority: Major
>  Labels: performance
> Fix For: 1.2.0
>
> Attachments: HBaseContext.scala
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> In the hbase-spark project, HBaseContext provides a bulkLoad method for 
> loading Spark RDD data into HBase easily. But when I use it frequently, the 
> program throws a "cannot create native thread" exception.
> Using the pstack command on the Spark driver process, the thread count keeps 
> increasing; using jstack, there are many threads named "main-SendThread" and 
> "main-EventThread".
> It seems that a connection is created before the bulk load, but its close 
> method is never invoked.
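A minimal sketch of the fix the report implies: make sure the connection opened for the bulk load is always closed. The helper and its signature below are assumptions for illustration, not hbase-spark's real HBaseContext API.

```java
import java.io.Closeable;
import java.io.IOException;

class BulkLoadHelper {
  // Stand-in for org.apache.hadoop.hbase.client.Connection.
  interface Connection extends Closeable {}

  static void bulkLoad(Connection conn, Runnable load) throws IOException {
    try {
      load.run();    // write and load the HFiles
    } finally {
      // Without this close(), every call leaks the ZooKeeper
      // "main-SendThread"/"main-EventThread" pair behind the connection.
      conn.close();
    }
  }
}
```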





[jira] [Commented] (HBASE-19848) Zookeeper thread leaks in hbase-spark bulkLoad method

2018-01-25 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339601#comment-16339601
 ] 

huaxiang sun commented on HBASE-19848:
--

Hi [~Key Hutu], as Ted mentioned, you can use the submit-patch script to submit 
the patch, or create a patch with "git diff --no-prefix". 
[~yuzhih...@gmail.com] and [~saint@gmail.com], can you add [~Key Hutu] to 
the contributor list so he can assign the Jira to himself? Thanks.

> Zookeeper thread leaks in hbase-spark bulkLoad method
> -
>
> Key: HBASE-19848
> URL: https://issues.apache.org/jira/browse/HBASE-19848
> Project: HBase
>  Issue Type: Bug
>  Components: spark, Zookeeper
>Affects Versions: 1.2.0
> Environment: hbase-spark-1.2.0-cdh5.12.1 version
> spark 1.6
>Reporter: Key Hutu
>Priority: Major
>  Labels: performance
> Fix For: 1.2.0
>
> Attachments: HBaseContext.scala
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> In the hbase-spark project, HBaseContext provides a bulkLoad method for 
> loading Spark RDD data into HBase easily. But when I use it frequently, the 
> program throws a "cannot create native thread" exception.
> Using the pstack command on the Spark driver process, the thread count keeps 
> increasing; using jstack, there are many threads named "main-SendThread" and 
> "main-EventThread".
> It seems that a connection is created before the bulk load, but its close 
> method is never invoked.





[jira] [Resolved] (HBASE-19631) Allow building HBase 1.5.x against Hadoop 3.0.0

2018-01-25 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-19631.
---
Resolution: Fixed

Committed to hbase-1 branch.

> Allow building HBase 1.5.x against Hadoop 3.0.0
> ---
>
> Key: HBASE-19631
> URL: https://issues.apache.org/jira/browse/HBASE-19631
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: 19631.txt
>
>






[jira] [Updated] (HBASE-19631) Allow building HBase 1.5.x against Hadoop 3.0.0

2018-01-25 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-19631:
--
Fix Version/s: (was: 1.4.0)
   1.5.0

> Allow building HBase 1.5.x against Hadoop 3.0.0
> ---
>
> Key: HBASE-19631
> URL: https://issues.apache.org/jira/browse/HBASE-19631
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: 19631.txt
>
>






[jira] [Commented] (HBASE-19840) Flakey TestMetaWithReplicas

2018-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339521#comment-16339521
 ] 

Hadoop QA commented on HBASE-19840:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
 9s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
9s{color} | {color:red} hbase-server: The patch generated 5 new + 244 unchanged 
- 12 fixed = 249 total (was 256) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
37s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
18m  7s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
14s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}107m  7s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19840 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907631/HBASE-19840.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 373add6e51fc 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / ce50830a0a |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| checkstyle | 
https://builds.apache.org/job/PreC

[jira] [Commented] (HBASE-15381) Implement a distributed MOB compaction by procedure

2018-01-25 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339492#comment-16339492
 ] 

huaxiang sun commented on HBASE-15381:
--

Hi [~jmspaggi], we should pick it up.

> Implement a distributed MOB compaction by procedure
> ---
>
> Key: HBASE-15381
> URL: https://issues.apache.org/jira/browse/HBASE-15381
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
>Priority: Major
> Attachments: HBASE-15381-v2.patch, HBASE-15381-v3.patch, 
> HBASE-15381-v4.patch, HBASE-15381-v5.patch, HBASE-15381-v6.patch, 
> HBASE-15381.patch, mob distributed compaction design-v2.pdf, mob distributed 
> compaction design.pdf
>
>
> In MOB, there is a periodical compaction which runs in HMaster (It can be 
> disabled by configuration), some small mob files are merged into bigger ones. 
> Now the compaction only runs in HMaster which is not efficient and might 
> impact the running of HMaster. In this JIRA, a distributed MOB compaction is 
> introduced, it is triggered by HMaster, but all the compaction jobs are 
> distributed to HRegionServers.





[jira] [Commented] (HBASE-19841) Tests against hadoop3 fail with StreamLacksCapabilityException

2018-01-25 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339476#comment-16339476
 ] 

Josh Elser commented on HBASE-19841:


{code}
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
index 9efec07915..bb98c407b4 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
@@ -394,7 +394,13 @@ public abstract class CommonFSUtils {
 
   public static FileSystem getWALFileSystem(final Configuration c) throws IOException {
     Path p = getWALRootDir(c);
-    return p.getFileSystem(c);
+    FileSystem fs = p.getFileSystem(c);
+    // Need to copy this to the new filesystem we are returning in case it is localFS
+    String enforceStreamCapabilities = c.get(CommonFSUtils.UNSAFE_STREAM_CAPABILITY_ENFORCE);
+    if (enforceStreamCapabilities != null) {
+      fs.getConf().set(CommonFSUtils.UNSAFE_STREAM_CAPABILITY_ENFORCE, enforceStreamCapabilities);
+    }
+    return fs;
   }
{code}

I'm surprised/confused by this. I would have thought that the FS's 
configuration would have been the Configuration which you provided when 
constructing it from the Path. I assume that must not be the case?

> Tests against hadoop3 fail with StreamLacksCapabilityException
> --
>
> Key: HBASE-19841
> URL: https://issues.apache.org/jira/browse/HBASE-19841
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 19841.v0.txt, 19841.v1.txt, HBASE-19841.v2.patch, 
> HBASE-19841.v3.patch, HBASE-19841.v4.patch, HBASE-19841.v5.patch
>
>
> The following can be observed running against hadoop3:
> {code}
> java.io.IOException: cannot get log writer
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> Caused by: 
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: 
> hflush and hsync
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> {code}
> This was due to hbase-server/src/test/resources/hbase-site.xml not being 
> picked up by Configuration object. Among the configs from this file, the 
> value for "hbase.unsafe.stream.capability.enforce" relaxes check for presence 
> of hflush and hsync. Without this config entry,  
> StreamLacksCapabilityException is thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-15666) shaded dependencies for hbase-testing-util

2018-01-25 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339434#comment-16339434
 ] 

Mike Drob commented on HBASE-15666:
---

bq. so either we need a new module, i.e. 
hbase-shaded/hbase-shaded-testing-util, that shades the hbase-testing-util 
module in the same way we shade the client
I just tried this, and without further tweaking of the defaults it only 
relocates the junit classes and excludes everything else. Dependency resolution 
is hard.

> shaded dependencies for hbase-testing-util
> --
>
> Key: HBASE-15666
> URL: https://issues.apache.org/jira/browse/HBASE-15666
> Project: HBase
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 1.1.0, 1.2.0
>Reporter: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.5.0
>
>
> Folks that make use of our shaded client but then want to test things using 
> the hbase-testing-util end up getting all of our dependencies again in the 
> test scope.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19848) Zookeeper thread leaks in hbase-spark bulkLoad method

2018-01-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-19848:
---
Status: Open  (was: Patch Available)

> Zookeeper thread leaks in hbase-spark bulkLoad method
> -
>
> Key: HBASE-19848
> URL: https://issues.apache.org/jira/browse/HBASE-19848
> Project: HBase
>  Issue Type: Bug
>  Components: spark, Zookeeper
>Affects Versions: 1.2.0
> Environment: hbase-spark-1.2.0-cdh5.12.1 version
> spark 1.6
>Reporter: Key Hutu
>Priority: Major
>  Labels: performance
> Fix For: 1.2.0
>
> Attachments: HBaseContext.scala
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> In the hbase-spark project, HBaseContext provides a bulkLoad method for 
> loading Spark RDD data into HBase easily. But when I use it frequently, the 
> program throws a "cannot create native thread" exception.
> Using the pstack command on the Spark driver process, the thread count keeps 
> increasing; jstack shows many threads named "main-SendThread" and 
> "main-EventThread".
> It seems that a connection is created before the bulk load, but its close 
> method is never invoked.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19848) Zookeeper thread leaks in hbase-spark bulkLoad method

2018-01-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339412#comment-16339412
 ] 

Ted Yu commented on HBASE-19848:


You can use dev-support/submit-patch.py to submit a patch to this JIRA.

The QA bot won't accept the java file for a test run.

> Zookeeper thread leaks in hbase-spark bulkLoad method
> -
>
> Key: HBASE-19848
> URL: https://issues.apache.org/jira/browse/HBASE-19848
> Project: HBase
>  Issue Type: Bug
>  Components: spark, Zookeeper
>Affects Versions: 1.2.0
> Environment: hbase-spark-1.2.0-cdh5.12.1 version
> spark 1.6
>Reporter: Key Hutu
>Priority: Major
>  Labels: performance
> Fix For: 1.2.0
>
> Attachments: HBaseContext.scala
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> In the hbase-spark project, HBaseContext provides a bulkLoad method for 
> loading Spark RDD data into HBase easily. But when I use it frequently, the 
> program throws a "cannot create native thread" exception.
> Using the pstack command on the Spark driver process, the thread count keeps 
> increasing; jstack shows many threads named "main-SendThread" and 
> "main-EventThread".
> It seems that a connection is created before the bulk load, but its close 
> method is never invoked.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-15666) shaded dependencies for hbase-testing-util

2018-01-25 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339395#comment-16339395
 ] 

Mike Drob commented on HBASE-15666:
---

This is a really interesting set of failures...

Running your project as is against 1.2.0, {{mvn test}} gives me a stack trace 
but actually completes.

Swapping to 1.4.0, I get a CNFE and the test fails to run.
{noformat}
testHbase(com.repro.hbase.TestHBase)  Time elapsed: 2.227 sec  <<< ERROR!
java.lang.NoClassDefFoundError: 
org/apache/hadoop/hbase/shaded/org/mortbay/jetty/servlet/Context
{noformat}

Swapping further to 2.0.0-beta-1 and replacing the shaded server with the 
shaded mapreduce artifact, the test fails with:
{noformat}
testHbase(com.repro.hbase.TestHBase)  Time elapsed: 2.461 sec  <<< ERROR!
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hdfs.web.WebHdfsFileSystem
{noformat}

I'm not sure if these are all the same problem or different problems, but I'll 
start poking at them separately.

> shaded dependencies for hbase-testing-util
> --
>
> Key: HBASE-15666
> URL: https://issues.apache.org/jira/browse/HBASE-15666
> Project: HBase
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 1.1.0, 1.2.0
>Reporter: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0, 1.5.0
>
>
> Folks that make use of our shaded client but then want to test things using 
> the hbase-testing-util end up getting all of our dependencies again in the 
> test scope.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19846) Fix findbugs and error-prone warnings in hbase-rest (branch-2)

2018-01-25 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339387#comment-16339387
 ] 

Peter Somogyi commented on HBASE-19846:
---

Thank you for the review!

> Fix findbugs and error-prone warnings in hbase-rest (branch-2)
> --
>
> Key: HBASE-19846
> URL: https://issues.apache.org/jira/browse/HBASE-19846
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-beta-1
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19846.master.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19846) Fix findbugs and error-prone warnings in hbase-rest (branch-2)

2018-01-25 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-19846:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the patch, Peter.

> Fix findbugs and error-prone warnings in hbase-rest (branch-2)
> --
>
> Key: HBASE-19846
> URL: https://issues.apache.org/jira/browse/HBASE-19846
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-beta-1
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19846.master.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-18841) ./bin/hbase ltt and pe cannot find their classes when in dev/build context

2018-01-25 Thread Peter Somogyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi resolved HBASE-18841.
---
Resolution: Cannot Reproduce

> ./bin/hbase ltt and pe cannot find their classes when in dev/build context
> --
>
> Key: HBASE-18841
> URL: https://issues.apache.org/jira/browse/HBASE-18841
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Priority: Minor
>
> If I run the below out of a built checkout, it fails, unable to find the main 
> method in the named LoadTestTool class:
> ./bin/hbase ltt
> Ditto for:
> ./bin/hbase pe
> The main classes are in *-test.jars which we do not include in our 
> cached_classpath.txt file that is our trick for making stuff work in dev 
> context.
> Investigate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-18841) ./bin/hbase ltt and pe cannot find their classes when in dev/build context

2018-01-25 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339373#comment-16339373
 ] 

Peter Somogyi commented on HBASE-18841:
---

On master branch I ran {{mvn clean install -DskipTests}} and after I was able 
to run LoadTestTool and PerformanceEvaluation.

> ./bin/hbase ltt and pe cannot find their classes when in dev/build context
> --
>
> Key: HBASE-18841
> URL: https://issues.apache.org/jira/browse/HBASE-18841
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Priority: Minor
>
> If I run the below out of a built checkout, it fails, unable to find the main 
> method in the named LoadTestTool class:
> ./bin/hbase ltt
> Ditto for:
> ./bin/hbase pe
> The main classes are in *-test.jars which we do not include in our 
> cached_classpath.txt file that is our trick for making stuff work in dev 
> context.
> Investigate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19841) Tests against hadoop3 fail with StreamLacksCapabilityException

2018-01-25 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339363#comment-16339363
 ] 

Mike Drob commented on HBASE-19841:
---

summoning mighty [~elserj] for review

> Tests against hadoop3 fail with StreamLacksCapabilityException
> --
>
> Key: HBASE-19841
> URL: https://issues.apache.org/jira/browse/HBASE-19841
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 19841.v0.txt, 19841.v1.txt, HBASE-19841.v2.patch, 
> HBASE-19841.v3.patch, HBASE-19841.v4.patch, HBASE-19841.v5.patch
>
>
> The following can be observed running against hadoop3:
> {code}
> java.io.IOException: cannot get log writer
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> Caused by: 
> org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: 
> hflush and hsync
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.compactingSetUp(TestCompactingMemStore.java:107)
>   at 
> org.apache.hadoop.hbase.regionserver.TestCompactingMemStore.setUp(TestCompactingMemStore.java:89)
> {code}
> This was due to hbase-server/src/test/resources/hbase-site.xml not being 
> picked up by Configuration object. Among the configs from this file, the 
> value for "hbase.unsafe.stream.capability.enforce" relaxes check for presence 
> of hflush and hsync. Without this config entry,  
> StreamLacksCapabilityException is thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19857) Complete the procedure for adding a sync replication peer

2018-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339332#comment-16339332
 ] 

Hadoop QA commented on HBASE-19857:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} HBASE-19064 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
51s{color} | {color:green} HBASE-19064 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} HBASE-19064 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
45s{color} | {color:green} HBASE-19064 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  7m 
 1s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} HBASE-19064 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
38s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
20m 41s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hbase-replication in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}121m 
50s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}169m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19857 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907670/HBASE-19857-HBASE-19064-v1.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 3606cf1918e4 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | HBASE-19064 / 8ffba491a7 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-

[jira] [Updated] (HBASE-8963) Add configuration option to skip HFile archiving

2018-01-25 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-8963:
-
Priority: Critical  (was: Major)

> Add configuration option to skip HFile archiving
> 
>
> Key: HBASE-8963
> URL: https://issues.apache.org/jira/browse/HBASE-8963
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 8963-v10.txt, HBASE-8963.trunk.v1.patch, 
> HBASE-8963.trunk.v2.patch, HBASE-8963.trunk.v3.patch, 
> HBASE-8963.trunk.v4.patch, HBASE-8963.trunk.v5.patch, 
> HBASE-8963.trunk.v6.patch, HBASE-8963.trunk.v7.patch, 
> HBASE-8963.trunk.v8.patch, HBASE-8963.trunk.v9.patch
>
>
> Currently HFileArchiver is always called when a table is dropped or compacted.
> A configuration option (either global or per-table) should be provided so 
> that archiving can be skipped when a table is deleted or compacted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19840) Flakey TestMetaWithReplicas

2018-01-25 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19840:
--
Status: Patch Available  (was: Open)

> Flakey TestMetaWithReplicas
> ---
>
> Key: HBASE-19840
> URL: https://issues.apache.org/jira/browse/HBASE-19840
> Project: HBase
>  Issue Type: Sub-task
>  Components: flakey, test
>Reporter: stack
>Assignee: stack
>Priority: Major
> Attachments: HBASE-19840.master.001.patch
>
>
> Failing about 15% of the time..  In testShutdownHandling.. 
> [https://builds.apache.org/view/H-L/view/HBase/job/HBase-Find-Flaky-Tests-branch2.0/lastSuccessfulBuild/artifact/dashboard.html]
>  
> Adding some debug. It's hard to follow what is going on in this test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19846) Fix findbugs and error-prone warnings in hbase-rest (branch-2)

2018-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339250#comment-16339250
 ] 

Hadoop QA commented on HBASE-19846:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 29 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
58s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} hbase-rest: The patch generated 0 new + 236 
unchanged - 39 fixed = 236 total (was 275) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
44s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
18m 14s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
55s{color} | {color:green} hbase-rest in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19846 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907685/HBASE-19846.master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 0522bf618616 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / ce50830a0a |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11194/testReport/ |
| modules | C: hbase-rest U: hbase-rest |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11194/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> Fix findbugs and error-prone warnings in hbase-rest (branch-2)
> --
>
>   

[jira] [Commented] (HBASE-19400) Add missing security hooks for MasterService RPCs

2018-01-25 Thread Balazs Meszaros (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339217#comment-16339217
 ] 

Balazs Meszaros commented on HBASE-19400:
-

Ok [~appy], you can take it :)

> Add missing security hooks for MasterService RPCs
> -
>
> Key: HBASE-19400
> URL: https://issues.apache.org/jira/browse/HBASE-19400
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-beta-1
>Reporter: Balazs Meszaros
>Assignee: Balazs Meszaros
>Priority: Major
> Attachments: HBASE-19400.master.001.patch, 
> HBASE-19400.master.002.patch
>
>
> The following RPC methods do not call the observers, therefore they are not 
> guarded by AccessController:
> - normalize
> - setNormalizerRunning
> - runCatalogScan
> - enableCatalogJanitor
> - runCleanerChore
> - setCleanerChoreRunning
> - execMasterService
> - execProcedure
> - execProcedureWithRet
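The fix pattern for the RPCs listed above is the standard coprocessor hook: each RPC entry point first invokes the registered observers' pre-hook, where AccessController can deny the call. The sketch below uses stand-in types (MasterObserver here is a simplified illustrative interface, not HBase's real one) to show the pattern for a single method like normalize:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for the coprocessor observer interface; the real fix wires
// pre-hooks like these into each listed MasterService RPC.
interface MasterObserver {
  void preNormalize() throws SecurityException;
}

// Stand-in for AccessController: denies the call unless the caller is admin.
class AccessController implements MasterObserver {
  private final boolean isAdmin;
  AccessController(boolean isAdmin) { this.isAdmin = isAdmin; }
  @Override public void preNormalize() {
    if (!isAdmin) throw new SecurityException("ADMIN permission required");
  }
}

public class MasterRpcSketch {
  private final List<MasterObserver> observers = new ArrayList<>();
  boolean normalized = false;

  void addObserver(MasterObserver o) { observers.add(o); }

  // Guarded RPC: run every pre-hook before doing the actual work, so an
  // observer can veto the call by throwing.
  void normalize() {
    for (MasterObserver o : observers) {
      o.preNormalize();
    }
    normalized = true;
  }

  public static void main(String[] args) {
    MasterRpcSketch master = new MasterRpcSketch();
    master.addObserver(new AccessController(false));
    boolean denied = false;
    try {
      master.normalize();
    } catch (SecurityException e) {
      denied = true;
    }
    // Without the hook, normalize() would have run unguarded.
    assert denied && !master.normalized;
    System.out.println("denied without ADMIN: " + denied);
  }
}
```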



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19846) Fix findbugs and error-prone warnings in hbase-rest (branch-2)

2018-01-25 Thread Peter Somogyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi updated HBASE-19846:
--
Status: Patch Available  (was: Open)

> Fix findbugs and error-prone warnings in hbase-rest (branch-2)
> --
>
> Key: HBASE-19846
> URL: https://issues.apache.org/jira/browse/HBASE-19846
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-beta-1
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19846.master.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19846) Fix findbugs and error-prone warnings in hbase-rest (branch-2)

2018-01-25 Thread Peter Somogyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi updated HBASE-19846:
--
Attachment: HBASE-19846.master.001.patch

> Fix findbugs and error-prone warnings in hbase-rest (branch-2)
> --
>
> Key: HBASE-19846
> URL: https://issues.apache.org/jira/browse/HBASE-19846
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-beta-1
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: HBASE-19846.master.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19848) Zookeeper thread leaks in hbase-spark bulkLoad method

2018-01-25 Thread Key Hutu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Key Hutu updated HBASE-19848:
-
Status: Patch Available  (was: Open)

In the bulkLoad and hbaseBulkLoadThinRows methods, call close().

I submitted a patch file as an attachment.
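The shape of the fix is the usual try/finally around the per-call connection. The sketch below uses a stand-in Closeable (StubConnection, a hypothetical name) in place of the real HBase Connection, to show how closing in finally stops the ZooKeeper client threads from accumulating:

```java
import java.io.Closeable;

// Stand-in for org.apache.hadoop.hbase.client.Connection; each open instance
// would keep ZK threads ("main-SendThread"/"main-EventThread") alive.
class StubConnection implements Closeable {
  static int openCount = 0;
  StubConnection() { openCount++; }
  @Override public void close() { openCount--; }
}

public class BulkLoadCloseSketch {
  // The fixed pattern: the connection opened for a bulk load is always
  // closed, even if the load itself throws.
  static void bulkLoad(Runnable work) {
    StubConnection conn = new StubConnection();
    try {
      work.run();  // write HFiles, then load them into the region servers
    } finally {
      conn.close();  // the missing close() the patch adds
    }
  }

  public static void main(String[] args) {
    for (int i = 0; i < 100; i++) {
      bulkLoad(() -> { /* per-call load */ });
    }
    // With close() in finally, no connections (and no ZK threads) leak.
    assert StubConnection.openCount == 0;
    System.out.println("open connections after 100 loads: " + StubConnection.openCount);
  }
}
```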

> Zookeeper thread leaks in hbase-spark bulkLoad method
> -
>
> Key: HBASE-19848
> URL: https://issues.apache.org/jira/browse/HBASE-19848
> Project: HBase
>  Issue Type: Bug
>  Components: spark, Zookeeper
>Affects Versions: 1.2.0
> Environment: hbase-spark-1.2.0-cdh5.12.1 version
> spark 1.6
>Reporter: Key Hutu
>Priority: Major
>  Labels: performance
> Fix For: 1.2.0
>
> Attachments: HBaseContext.scala
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> In the hbase-spark project, HBaseContext provides a bulkLoad method for 
> loading Spark RDD data into HBase easily. But when I use it frequently, the 
> program throws a "cannot create native thread" exception.
> Using the pstack command on the Spark driver process, the thread count keeps 
> increasing; jstack shows many threads named "main-SendThread" and 
> "main-EventThread".
> It seems that a connection is created before the bulk load, but its close 
> method is never invoked.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19848) Zookeeper thread leaks in hbase-spark bulkLoad method

2018-01-25 Thread Key Hutu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Key Hutu updated HBASE-19848:
-
Attachment: HBaseContext.scala

> Zookeeper thread leaks in hbase-spark bulkLoad method
> -
>
> Key: HBASE-19848
> URL: https://issues.apache.org/jira/browse/HBASE-19848
> Project: HBase
>  Issue Type: Bug
>  Components: spark, Zookeeper
>Affects Versions: 1.2.0
> Environment: hbase-spark-1.2.0-cdh5.12.1 version
> spark 1.6
>Reporter: Key Hutu
>Priority: Major
>  Labels: performance
> Fix For: 1.2.0
>
> Attachments: HBaseContext.scala
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> In the hbase-spark project, HBaseContext provides a bulkLoad method for 
> loading Spark RDD data into HBase easily. But when I use it frequently, the 
> program throws a "cannot create native thread" exception.
> Using the pstack command on the Spark driver process, the thread count keeps 
> increasing; jstack shows many threads named "main-SendThread" and 
> "main-EventThread".
> It seems that a connection is created before the bulk load, but its close 
> method is never invoked.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19818) Scan time limit not work if the filter always filter row key

2018-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339162#comment-16339162
 ] 

Hadoop QA commented on HBASE-19818:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m  
0s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 1s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} hbase-server: The patch generated 0 new + 253 
unchanged - 16 fixed = 253 total (was 269) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
58s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 46s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 14s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:9f2f2db |
| JIRA Issue | HBASE-19818 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907658/HBASE-19818.branch-2.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux c4e890f60b4e 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2 / 130da9d18b |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11192/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11192/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/11192/console |
| Powered by | 

[jira] [Commented] (HBASE-19857) Complete the procedure for adding a sync replication peer

2018-01-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339139#comment-16339139
 ] 

Duo Zhang commented on HBASE-19857:
---

Fix the failed UT.

And [~zghaobac], I do not think it is a good idea to use enum.ordinal as the 
serialized data. Let's use pb to keep the same pattern as the other stuff on zk? 
Can you open a new issue to address it?

Thanks.

> Complete the procedure for adding a sync replication peer
> -
>
> Key: HBASE-19857
> URL: https://issues.apache.org/jira/browse/HBASE-19857
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-19857-HBASE-19064-v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19857) Complete the procedure for adding a sync replication peer

2018-01-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-19857:
--
Attachment: (was: HBASE-19857-HBASE-19064.patch)

> Complete the procedure for adding a sync replication peer
> -
>
> Key: HBASE-19857
> URL: https://issues.apache.org/jira/browse/HBASE-19857
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-19857-HBASE-19064-v1.patch
>
>






[jira] [Updated] (HBASE-19857) Complete the procedure for adding a sync replication peer

2018-01-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-19857:
--
Attachment: HBASE-19857-HBASE-19064-v1.patch

> Complete the procedure for adding a sync replication peer
> -
>
> Key: HBASE-19857
> URL: https://issues.apache.org/jira/browse/HBASE-19857
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-19857-HBASE-19064-v1.patch
>
>






[jira] [Commented] (HBASE-19857) Complete the procedure for adding a sync replication peer

2018-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339098#comment-16339098
 ] 

Hadoop QA commented on HBASE-19857:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} HBASE-19064 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
57s{color} | {color:green} HBASE-19064 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} HBASE-19064 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} HBASE-19064 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  7m 
 4s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} HBASE-19064 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
15s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
22m 37s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.5 2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hbase-replication in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 43s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.replication.regionserver.TestReplicationSourceManagerZkImpl |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19857 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12907646/HBASE-19857-HBASE-19064.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 3bada2da8c3c 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | HBASE-19064 / 8ffba491a7 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017

[jira] [Updated] (HBASE-19861) Avoid using RPCs when querying table infos for master status pages

2018-01-25 Thread Xiaolin Ha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Ha updated HBASE-19861:
---
Attachment: HBASE-19861.v1.patch

> Avoid using RPCs when querying table infos for master status pages
> --
>
> Key: HBASE-19861
> URL: https://issues.apache.org/jira/browse/HBASE-19861
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Attachments: HBASE-19861.v1.patch
>
>
> When querying table information for the master status pages, the current 
> method uses admin interfaces. For example, when listing user tables, the code 
> is as follows.
> {code:java}
> Connection connection = master.getConnection();
> Admin admin = connection.getAdmin();
> try {
>   tables = admin.listTables();
> } finally {
>   admin.close();
> }
> {code}
> But actually, we can get all user tables from the master's memory.
> Going through admin interfaces means issuing RPCs, which is inefficient.
>  
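The proposal above can be sketched outside HBase with a toy master object: the status page reads table names from a map the master already keeps in memory, instead of opening an Admin connection and issuing an RPC. `MockMaster` and its descriptor map are hypothetical stand-ins, not the HBase API.

```java
import java.util.*;

// Sketch of the proposed change: serve the status page from the master's
// in-memory table descriptor cache rather than via an Admin RPC round trip.
// All names here are illustrative, not real HBase classes.
public class StatusPageSketch {
    static class MockMaster {
        // In a real master this would be the TableDescriptors cache.
        private final Map<String, String> tableDescriptors = new HashMap<>();

        void createTable(String name, String descriptor) {
            tableDescriptors.put(name, descriptor);
        }

        // No RPC involved: just a read from local memory.
        List<String> listTableNamesFromMemory() {
            return new ArrayList<>(new TreeSet<>(tableDescriptors.keySet()));
        }
    }

    public static void main(String[] args) {
        MockMaster master = new MockMaster();
        master.createTable("t1", "cf=f1");
        master.createTable("t2", "cf=f1");
        System.out.println(master.listTableNamesFromMemory()); // [t1, t2]
    }
}
```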





[jira] [Updated] (HBASE-19861) Avoid using RPCs when querying table infos for master status pages

2018-01-25 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19861:
---
Component/s: (was: monitoring)
 UI

> Avoid using RPCs when querying table infos for master status pages
> --
>
> Key: HBASE-19861
> URL: https://issues.apache.org/jira/browse/HBASE-19861
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
>
> When querying table information for the master status pages, the current 
> method uses admin interfaces. For example, when listing user tables, the code 
> is as follows.
> {code:java}
> Connection connection = master.getConnection();
> Admin admin = connection.getAdmin();
> try {
>   tables = admin.listTables();
> } finally {
>   admin.close();
> }
> {code}
> But actually, we can get all user tables from the master's memory.
> Going through admin interfaces means issuing RPCs, which is inefficient.
>  





[jira] [Commented] (HBASE-19818) Scan time limit not work if the filter always filter row key

2018-01-25 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339029#comment-16339029
 ] 

Guanghao Zhang commented on HBASE-19818:


As ScannerContext is LimitedPrivate, the methods are marked as deprecated in the 
branch-2 patch.

> Scan time limit not work if the filter always filter row key
> 
>
> Key: HBASE-19818
> URL: https://issues.apache.org/jira/browse/HBASE-19818
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.0-beta-2
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-19818.branch-2.patch, HBASE-19818.master.003.patch
>
>
> [https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java]
> nextInternal() method.
> {code:java}
> // Check if rowkey filter wants to exclude this row. If so, loop to next.
> // Technically, if we hit limits before on this row, we don't need this call.
> if (filterRowKey(current)) {
>   incrementCountOfRowsFilteredMetric(scannerContext);
>   // early check, see HBASE-16296
>   if (isFilterDoneInternal()) {
>     return scannerContext.setScannerState(NextState.NO_MORE_VALUES).hasMoreValues();
>   }
>   // Typically the count of rows scanned is incremented inside #populateResult.
>   // However, here we are filtering a row based purely on its row key, preventing
>   // us from calling #populateResult. Thus, perform the necessary increment here
>   // to the rows scanned metric.
>   incrementCountOfRowsScannedMetric(scannerContext);
>   boolean moreRows = nextRow(scannerContext, current);
>   if (!moreRows) {
>     return scannerContext.setScannerState(NextState.NO_MORE_VALUES).hasMoreValues();
>   }
>   results.clear();
>   continue;
> }
>
> // Ok, we are good, let's try to get some results from the main heap.
> populateResult(results, this.storeHeap, scannerContext, current);
> if (scannerContext.checkAnyLimitReached(LimitScope.BETWEEN_CELLS)) {
>   if (hasFilterRow) {
>     throw new IncompatibleFilterException(
>       "Filter whose hasFilterRow() returns true is incompatible with scans that must "
>       + " stop mid-row because of a limit. ScannerContext:" + scannerContext);
>   }
>   return true;
> }
> {code}
> If filterRowKey always returns true, then we skip straight past 
> checkAnyLimitReached. For the batch/size limits it is ok to skip, as we don't 
> read anything. But for the time limit it is not right: if the filter always 
> filters the row key, we will be stuck here for a long time.
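The failure mode described above can be modeled outside HBase with a toy scan loop: when a filter excludes every row by key, the loop must still check the time budget on the filtered path, or the scan never returns within its deadline. All names here (`TimeLimitSketch`, `scan`) are illustrative, not the HBase ScannerContext API.

```java
import java.util.*;
import java.util.function.Predicate;

// Toy model of the fix: check the time limit even on rows excluded purely by
// row key, so a filter-everything scan cannot overrun its time budget.
public class TimeLimitSketch {
    static List<String> scan(List<String> rows, Predicate<String> filterRowKey,
                             long deadlineNanos) {
        List<String> results = new ArrayList<>();
        for (String row : rows) {
            if (filterRowKey.test(row)) {
                // The fix: also check the deadline here, on the filtered path,
                // not only after populating results.
                if (System.nanoTime() >= deadlineNanos) {
                    break; // return a partial (possibly empty) result to the client
                }
                continue;
            }
            results.add(row);
            if (System.nanoTime() >= deadlineNanos) {
                break;
            }
        }
        return results;
    }

    public static void main(String[] args) {
        List<String> rows = Arrays.asList("a", "b", "c", "d");
        // Filter out rows starting with 'a'; deadline is generous, rest are kept.
        List<String> out = scan(rows, r -> r.startsWith("a"),
                                System.nanoTime() + 1_000_000_000L);
        System.out.println(out); // [b, c, d]
    }
}
```

Without the deadline check inside the `filterRowKey` branch, a predicate that is always true would spin through every row before returning, which mirrors the bug in `nextInternal()`.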





[jira] [Updated] (HBASE-19818) Scan time limit not work if the filter always filter row key

2018-01-25 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-19818:
---
Attachment: HBASE-19818.branch-2.patch

> Scan time limit not work if the filter always filter row key
> 
>
> Key: HBASE-19818
> URL: https://issues.apache.org/jira/browse/HBASE-19818
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.0-beta-2
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-19818.branch-2.patch, HBASE-19818.master.003.patch
>
>
> [https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java]
> nextInternal() method.
> {code:java}
> // Check if rowkey filter wants to exclude this row. If so, loop to next.
> // Technically, if we hit limits before on this row, we don't need this call.
> if (filterRowKey(current)) {
>   incrementCountOfRowsFilteredMetric(scannerContext);
>   // early check, see HBASE-16296
>   if (isFilterDoneInternal()) {
>     return scannerContext.setScannerState(NextState.NO_MORE_VALUES).hasMoreValues();
>   }
>   // Typically the count of rows scanned is incremented inside #populateResult.
>   // However, here we are filtering a row based purely on its row key, preventing
>   // us from calling #populateResult. Thus, perform the necessary increment here
>   // to the rows scanned metric.
>   incrementCountOfRowsScannedMetric(scannerContext);
>   boolean moreRows = nextRow(scannerContext, current);
>   if (!moreRows) {
>     return scannerContext.setScannerState(NextState.NO_MORE_VALUES).hasMoreValues();
>   }
>   results.clear();
>   continue;
> }
>
> // Ok, we are good, let's try to get some results from the main heap.
> populateResult(results, this.storeHeap, scannerContext, current);
> if (scannerContext.checkAnyLimitReached(LimitScope.BETWEEN_CELLS)) {
>   if (hasFilterRow) {
>     throw new IncompatibleFilterException(
>       "Filter whose hasFilterRow() returns true is incompatible with scans that must "
>       + " stop mid-row because of a limit. ScannerContext:" + scannerContext);
>   }
>   return true;
> }
> {code}
> If filterRowKey always returns true, then we skip straight past 
> checkAnyLimitReached. For the batch/size limits it is ok to skip, as we don't 
> read anything. But for the time limit it is not right: if the filter always 
> filters the row key, we will be stuck here for a long time.





[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2018-01-25 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339026#comment-16339026
 ] 

Appy commented on HBASE-17852:
--

Man (lightly shaking head side-to-side)... such strong responses when we are 
trying to scope out the needed work/design changes for a better B&R in 2.1. Please 
work with me here.. smile.

Why do you believe procv2 is a new feature? It has been used for core HBase 
functionality - creating and deleting tables, etc. - since the 1.2 release.
What would make it mature & robust enough for B&R, in your opinion?


> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-17852-v10.patch, screenshot-1.png
>
>
> The rollback-via-snapshot design approach implemented in this ticket:
> # Before a backup create/delete/merge starts, we take a snapshot of the backup 
> meta-table (backup system table). This procedure is lightweight because the 
> meta-table is small and usually fits in a single region.
> # When an operation fails on the server side, we handle the failure by cleaning 
> up partial data in the backup destination, followed by restoring the backup 
> meta-table from the snapshot.
> # When an operation fails on the client side (abnormal termination, for example), 
> the next time the user tries a create/merge/delete, he (she) will see an error 
> message that the system is in an inconsistent state and repair is required, and 
> he (she) will need to run the backup repair tool.
> # To avoid multiple writers to the backup system table (the backup client and 
> BackupObserver's), we introduce a small table ONLY to keep the listing of bulk 
> loaded files. All backup observers will work only with this new table. The 
> reason: in case of a failure during a backup create/delete/merge/restore, when 
> the system performs an automatic rollback, some data written by backup observers 
> during the failed operation may be lost. This is what we try to avoid.
> # The second table keeps only bulk load related references. We do not care about 
> the consistency of this table, because bulk load is an idempotent operation and 
> can be repeated after a failure. Partially written data in the second table does 
> not affect the BackupHFileCleaner plugin, because this data (the list of bulk 
> loaded files) corresponds to files which have not yet been loaded successfully 
> and, hence, are not visible to the system.
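The snapshot-then-restore pattern in the design above can be sketched with a few lines of plain Java. A map stands in for the backup system table; `RollbackViaSnapshot` and its method names are illustrative only, not the HBase backup code.

```java
import java.util.*;

// Minimal sketch of rollback-via-snapshot: copy the (small) meta-table before
// an operation, and restore the copy if the operation fails. A real backup
// meta-table is an HBase table; here an in-memory map models it.
public class RollbackViaSnapshot {
    private Map<String, String> metaTable = new HashMap<>();
    private Map<String, String> snapshot;

    void takeSnapshot() {
        // Cheap because the meta-table is small (usually a single region).
        snapshot = new HashMap<>(metaTable);
    }

    void restoreSnapshot() {
        metaTable = new HashMap<>(snapshot);
    }

    // Run a backup operation; on failure, roll the meta-table back.
    boolean runOperation(Runnable op) {
        takeSnapshot();
        try {
            op.run();
            return true;
        } catch (RuntimeException e) {
            restoreSnapshot();
            return false;
        }
    }

    Map<String, String> meta() { return metaTable; }

    public static void main(String[] args) {
        RollbackViaSnapshot b = new RollbackViaSnapshot();
        b.runOperation(() -> b.meta().put("backup_1", "COMPLETE"));
        boolean ok = b.runOperation(() -> {
            b.meta().put("backup_2", "RUNNING"); // partial write...
            throw new RuntimeException("server died mid-backup");
        });
        // The failed operation's partial write was rolled back.
        System.out.println(ok + " " + b.meta()); // false {backup_1=COMPLETE}
    }
}
```

This also illustrates why the design keeps bulk-load references in a second table: anything written to the snapshotted table during a failed operation is lost on rollback, which is acceptable for the meta-table but not for observer writes.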





[jira] [Commented] (HBASE-19854) Add a link to Ref Guide PDF

2018-01-25 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339012#comment-16339012
 ] 

Peter Somogyi commented on HBASE-19854:
---

I don't see the added value in this because the PDF is already linked on the 
website. Why isn't that sufficient?

> Add a link to Ref Guide PDF
> ---
>
> Key: HBASE-19854
> URL: https://issues.apache.org/jira/browse/HBASE-19854
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Laxmi Narsimha Rao Oruganti
>Priority: Minor
>
> Many times, users want to have an offline copy of the Ref Guide.  Some people 
> prefer to save the HTML, and some prefer it in PDF format.  Hence, the Apache 
> HBase team generates a PDF version of the document periodically and keeps it 
> available at: [https://hbase.apache.org/apache_hbase_reference_guide.pdf]
> It would be good if a link to this URL were available in the online guide so 
> that users become aware that there is a PDF version.  Right now, unless 
> someone explicitly looks for it using a Google/Bing search, they would not 
> know.
>  
> As the PDF URL is fixed for the latest documentation, it can be a static href.  
> However, I have no clue how to get a "version-relevant" PDF link for archived 
> ref guides.
>  





[jira] [Commented] (HBASE-19859) Update download page header for 1.1 EOL

2018-01-25 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16339008#comment-16339008
 ] 

Appy commented on HBASE-19859:
--

Isn't 1.4.x the current stable release line?

> Update download page header for 1.1 EOL
> ---
>
> Key: HBASE-19859
> URL: https://issues.apache.org/jira/browse/HBASE-19859
> Project: HBase
>  Issue Type: Task
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Attachments: HBASE-19583.patch
>
>
> See example mirror: http://mirrors.ocf.berkeley.edu/apache/hbase/
> They still claim that 1.1 is under active development.





[jira] [Commented] (HBASE-8963) Add configuration option to skip HFile archiving

2018-01-25 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338985#comment-16338985
 ] 

Jean-Marc Spaggiari commented on HBASE-8963:


Folks, any idea if this will one day be done? I just had to drop a table with 
400,000 HFiles; it takes a while ;) I just want to skip this move...

> Add configuration option to skip HFile archiving
> 
>
> Key: HBASE-8963
> URL: https://issues.apache.org/jira/browse/HBASE-8963
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: 8963-v10.txt, HBASE-8963.trunk.v1.patch, 
> HBASE-8963.trunk.v2.patch, HBASE-8963.trunk.v3.patch, 
> HBASE-8963.trunk.v4.patch, HBASE-8963.trunk.v5.patch, 
> HBASE-8963.trunk.v6.patch, HBASE-8963.trunk.v7.patch, 
> HBASE-8963.trunk.v8.patch, HBASE-8963.trunk.v9.patch
>
>
> Currently, HFileArchiver is always called when a table is dropped or compacted.
> A configuration option (either global or per table) should be provided so 
> that archiving can be skipped when a table is deleted or compacted.





[jira] [Commented] (HBASE-19811) Fix findbugs and error-prone warnings in hbase-server (branch-2)

2018-01-25 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338959#comment-16338959
 ] 

Peter Somogyi commented on HBASE-19811:
---

Can someone take a look at the attached addendum?

> Fix findbugs and error-prone warnings in hbase-server (branch-2)
> 
>
> Key: HBASE-19811
> URL: https://issues.apache.org/jira/browse/HBASE-19811
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0-beta-1
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Major
> Fix For: 2.0.0-beta-2
>
> Attachments: 1-HBASE-19811.branch-2.002.patch, 
> HBASE-19811.branch-2.001.patch, HBASE-19811.branch-2.001.patch, 
> HBASE-19811.branch-2.002.patch, HBASE-19811.branch-2.ADDENDUM.patch, 
> HBASE-19811.master.002.patch
>
>






[jira] [Commented] (HBASE-19859) Update download page header for 1.1 EOL

2018-01-25 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338957#comment-16338957
 ] 

Peter Somogyi commented on HBASE-19859:
---

+1

> Update download page header for 1.1 EOL
> ---
>
> Key: HBASE-19859
> URL: https://issues.apache.org/jira/browse/HBASE-19859
> Project: HBase
>  Issue Type: Task
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Attachments: HBASE-19583.patch
>
>
> See example mirror: http://mirrors.ocf.berkeley.edu/apache/hbase/
> They still claim that 1.1 is under active development.





[jira] [Commented] (HBASE-19750) print pretty rowkey in getExceptionMessageAdditionalDetail

2018-01-25 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338948#comment-16338948
 ] 

Jingcheng Du commented on HBASE-19750:
--

Thanks [~suxingfate] for the patch. +1 on the 005 patch.

Hi [~carp84], what's your idea on the latest patch? Thanks.

> print pretty rowkey in getExceptionMessageAdditionalDetail
> --
>
> Key: HBASE-19750
> URL: https://issues.apache.org/jira/browse/HBASE-19750
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Attachments: HBASE-19750.001.patch, HBASE-19750.002.patch, 
> HBASE-19750.003.patch, HBASE-19750.004.patch, HBASE-19750.005.patch
>
>
> Sometimes the rowkey is in binary format and cannot be printed as a 
> human-readable string. In this case, the exception will still call the 
> toString() method and produce something like '�\(�\'.
> It is very inefficient to troubleshoot the issue when we get such an 
> exception, as we can't identify the problematic row key from the 
> printout.
> The idea here is to print the rowkey using Bytes.toStringBinary() in 
> addition to Bytes.toString(row).
> If the row was serialized from a human-readable string, then Bytes.toString(row) 
> makes more sense. When it came from a non-human-readable string, 
> Bytes.toStringBinary(row) will help.
> In any case, the output of Bytes.toStringBinary(row) can be pasted into the 
> hbase shell to do a scan, so that we can easily identify the corresponding row.
> {code:java}
> 2017-12-16 07:25:41,304 INFO [main] org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl: recovered from org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
> Sat Dec 16 07:25:41 GMT-07:00 2017, null, java.net.SocketTimeoutException: callTimeout=25, callDuration=250473: row '�\(�\' on table 'mytable' at region=mytable,\xDF\x5C(\xF5\xC2\x8F\x5C\x1B,1412216342143.5d74ce411eecd40001d9bf6e62f0b607., hostname=mycluster.internal.xx.com,60020,1503881012672, seqNum=6265890293
>   at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
>   at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:203)
>   at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>   at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
>   at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
>   at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:403)
>   at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
>   at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:205)
>   at org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:147)
>   at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$1.nextKeyValue(TableInputFormatBase.java:216)
>   at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
>   at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
>   at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
> {code}
> Current code, RegionServerCallable.java:
> {code:java}
> public String getExceptionMessageAdditionalDetail() {
>   return "row '" + Bytes.toString(row) + "' on table '" + tableName + "' at " + location;
> }
> {code}
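The effect of Bytes.toStringBinary can be shown with a small standalone re-implementation of its escaping scheme (a sketch of the idea, not the HBase source): printable ASCII passes through, and every other byte becomes a \xNN escape, so the output is unambiguous and can be pasted back into the hbase shell.

```java
import java.nio.charset.StandardCharsets;

// Standalone sketch of the Bytes.toStringBinary escaping scheme: printable
// ASCII bytes pass through, everything else (and backslash) is escaped as
// \xNN, keeping a binary rowkey both readable and copy-pasteable.
public class StringBinarySketch {
    static String toStringBinary(byte[] b) {
        StringBuilder sb = new StringBuilder();
        for (byte value : b) {
            int ch = value & 0xFF;
            // Keep printable ASCII as-is, escape the rest.
            if (ch >= ' ' && ch <= '~' && ch != '\\') {
                sb.append((char) ch);
            } else {
                sb.append(String.format("\\x%02X", ch));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] binaryRow = {(byte) 0xDF, '\\', '(', (byte) 0xF5, (byte) 0xC2};
        // Plain decoding produces mojibake; the binary-safe form is unambiguous.
        System.out.println(new String(binaryRow, StandardCharsets.ISO_8859_1));
        System.out.println(toStringBinary(binaryRow)); // \xDF\x5C(\xF5\xC2
    }
}
```

The escaped form matches the region-name encoding already visible in the log above (`\xDF\x5C(\xF5\xC2...`), which is what makes it directly usable in a shell scan.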





[jira] [Commented] (HBASE-19082) Implement a procedure to convert RS from DA to S

2018-01-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338939#comment-16338939
 ] 

Duo Zhang commented on HBASE-19082:
---

This procedure should be implemented first.

On the RS side, we need to reopen all the regions for this peer and disable 
reads/writes from clients, while still accepting write requests from replication.

> Implement a procedure to convert RS from DA to S
> 
>
> Key: HBASE-19082
> URL: https://issues.apache.org/jira/browse/HBASE-19082
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Duo Zhang
>Priority: Major
>  Labels: HBASE-19064
> Fix For: 3.0.0
>
>






[jira] [Commented] (HBASE-15381) Implement a distributed MOB compaction by procedure

2018-01-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338936#comment-16338936
 ] 

Hadoop QA commented on HBASE-15381:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HBASE-15381 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.6.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-15381 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12799714/HBASE-15381.patch |
| Console output | https://builds.apache.org/job/PreCommit-HBASE-Build/11191/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> Implement a distributed MOB compaction by procedure
> ---
>
> Key: HBASE-15381
> URL: https://issues.apache.org/jira/browse/HBASE-15381
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
>Priority: Major
> Attachments: HBASE-15381-v2.patch, HBASE-15381-v3.patch, 
> HBASE-15381-v4.patch, HBASE-15381-v5.patch, HBASE-15381-v6.patch, 
> HBASE-15381.patch, mob distributed compaction design-v2.pdf, mob distributed 
> compaction design.pdf
>
>
> In MOB, there is a periodic compaction which runs in the HMaster (it can be 
> disabled by configuration), in which small mob files are merged into bigger 
> ones. Currently the compaction runs only in the HMaster, which is not 
> efficient and might impact the operation of the HMaster. In this JIRA, a 
> distributed MOB compaction is introduced: it is triggered by the HMaster, but 
> all the compaction jobs are distributed to the HRegionServers.
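The flow in the quoted description (the master only selects and dispatches work, while the merging of small mob files runs on the region servers) can be sketched as follows. Every name here is hypothetical and not taken from the actual patches:

```java
// Illustrative-only sketch of a distributed MOB compaction: the master
// groups small mob files per region server and dispatches a job for each
// group, instead of doing the merging itself.
import java.util.List;
import java.util.Map;

class DistributedMobCompaction {
  /** Master side: dispatch one compaction job per server that has work. */
  static int dispatch(Map<String, List<String>> smallMobFilesByServer) {
    int jobs = 0;
    for (Map.Entry<String, List<String>> e : smallMobFilesByServer.entrySet()) {
      if (e.getValue().size() > 1) {     // merging needs at least two small files
        submitToServer(e.getKey(), e.getValue());
        jobs++;
      }
    }
    return jobs;
  }

  /** Region server side: merge the assigned small files into a bigger one. */
  static void submitToServer(String server, List<String> files) {
    // In the real design this would be an RPC / procedure step;
    // here it is a no-op placeholder.
  }
}
```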





[jira] [Commented] (HBASE-15381) Implement a distributed MOB compaction by procedure

2018-01-25 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338934#comment-16338934
 ] 

Jean-Marc Spaggiari commented on HBASE-15381:
-

[~jmhsieh] [~stack] [~jingcheng...@intel.com] any chance of seeing this done 
soon? I'm looking at a cluster where 90% of the workload is MOBs, and running 
compaction on the master sounds a bit strange, especially when you add servers 
or lose servers and need to get back locality, etc.

> Implement a distributed MOB compaction by procedure
> ---
>
> Key: HBASE-15381
> URL: https://issues.apache.org/jira/browse/HBASE-15381
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
>Priority: Major
> Attachments: HBASE-15381-v2.patch, HBASE-15381-v3.patch, 
> HBASE-15381-v4.patch, HBASE-15381-v5.patch, HBASE-15381-v6.patch, 
> HBASE-15381.patch, mob distributed compaction design-v2.pdf, mob distributed 
> compaction design.pdf
>
>
> In MOB, there is a periodic compaction which runs in the HMaster (it can be 
> disabled by configuration), in which small mob files are merged into bigger 
> ones. Currently the compaction runs only in the HMaster, which is not 
> efficient and might impact the operation of the HMaster. In this JIRA, a 
> distributed MOB compaction is introduced: it is triggered by the HMaster, but 
> all the compaction jobs are distributed to the HRegionServers.





[jira] [Commented] (HBASE-19857) Complete the procedure for adding a sync replication peer

2018-01-25 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338935#comment-16338935
 ] 

Guanghao Zhang commented on HBASE-19857:


+1

> Complete the procedure for adding a sync replication peer
> -
>
> Key: HBASE-19857
> URL: https://issues.apache.org/jira/browse/HBASE-19857
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-19857-HBASE-19064.patch
>
>






[jira] [Commented] (HBASE-19854) Add a link to Ref Guide PDF

2018-01-25 Thread Laxmi Narsimha Rao Oruganti (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338923#comment-16338923
 ] 

Laxmi Narsimha Rao Oruganti commented on HBASE-19854:
-

Hi [~psomogyi] - Thanks for taking this up.  I wanted to see if this awareness 
can be added on the first page of the HTML version of the book.  As mentioned, 
we reach the HTML version of the book from a Google/Bing search; many do not 
know that a PDF version exists.

> Add a link to Ref Guide PDF
> ---
>
> Key: HBASE-19854
> URL: https://issues.apache.org/jira/browse/HBASE-19854
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Laxmi Narsimha Rao Oruganti
>Priority: Minor
>
> Many times, users want an offline copy of the Ref Guide.  Some people prefer 
> to save the HTML, and some prefer the PDF format.  Hence, the Apache HBase 
> team periodically generates a PDF version of the document and keeps it 
> available at: [https://hbase.apache.org/apache_hbase_reference_guide.pdf]
> It would be good if a link to this URL were available in the online guide so 
> that users would become aware that there is a PDF version.  Right now, unless 
> someone explicitly looks for it using a Google/Bing search, they would not 
> know.
>  
> As the PDF URL is fixed for the latest documentation, it can be a static 
> href.  However, I don't have any clue how to ensure a "version-relevant" PDF 
> link for archived ref guides.
>  





[jira] [Commented] (HBASE-19079) Implement a procedure to convert RS from DA to A

2018-01-25 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338922#comment-16338922
 ] 

Duo Zhang commented on HBASE-19079:
---

Actually there is no big difference between adding a normal replication peer 
and a sync replication peer. Maybe the only difference is that we need to 
reject replication requests from the other cluster when in the DA state, and 
this will be addressed by HBASE-19782. So here I changed the title to 'convert 
RS from DA to A'.
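The state check mentioned above (rejecting replication requests from the other cluster while in DA, to be addressed by HBASE-19782) could look roughly like this. All names are invented for illustration and are not the real HBase implementation:

```java
// Hypothetical sketch: on the active side of a sync replication pair
// (states DA and A), replication entries shipped from the other cluster
// are rejected; only the standby side (S) applies them.
enum PeerState { DA, A, S }

class ReplicationSink {
  static boolean shouldAccept(PeerState state, boolean fromOtherCluster) {
    if (fromOtherCluster && (state == PeerState.DA || state == PeerState.A)) {
      return false;           // the active side never applies remote edits
    }
    return true;              // local writes, and remote edits on standby
  }
}
```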

> Implement a procedure to convert RS from DA to A
> 
>
> Key: HBASE-19079
> URL: https://issues.apache.org/jira/browse/HBASE-19079
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Duo Zhang
>Priority: Major
>  Labels: HBASE-19064
> Fix For: 3.0.0
>
>






[jira] [Updated] (HBASE-19079) Implement a procedure to convert RS from DA to A

2018-01-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-19079:
--
Summary: Implement a procedure to convert RS from DA to A  (was: Implement 
a procedure to convert normal RS to state DA)

> Implement a procedure to convert RS from DA to A
> 
>
> Key: HBASE-19079
> URL: https://issues.apache.org/jira/browse/HBASE-19079
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Duo Zhang
>Priority: Major
>  Labels: HBASE-19064
> Fix For: 3.0.0
>
>






[jira] [Updated] (HBASE-19857) Complete the procedure for adding a sync replication peer

2018-01-25 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-19857:
--
Assignee: Duo Zhang
  Status: Patch Available  (was: Open)

> Complete the procedure for adding a sync replication peer
> -
>
> Key: HBASE-19857
> URL: https://issues.apache.org/jira/browse/HBASE-19857
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-19857-HBASE-19064.patch
>
>





